I have a fairly complex struct that contains many interfaces, each with different implementations. To en/decode that struct with gob I seem to have to register every implementation that could possibly be used for every interface. So I end up with a method along these lines:
func registerImplementations() {
gob.Register(&Type1{})
gob.Register(&Type2{})
gob.Register(&Type3{})
gob.Register(&Type4{})
....
}
which I need to call before en/decoding. Is there an easier way to do this? Or should I look into possibilities for generating this method, since it's quite tedious to keep track of all possible implementations?
The documentation says:
We must register the concrete type for the encoder and decoder (which would
normally be on a separate machine from the encoder). On each end, this tells the
engine which concrete type is being sent that implements the interface.
So, at some point, you're going to want to call gob.Register, but you do want your code to be maintainable. This leaves (broadly) two options:
Creating a function like you're doing now, registering each type one after another.
Advantage: all your Register-calls in a list, so you'll easily spot if you miss one, and you surely won't register one twice.
Disadvantage: you'll have to update it when using another implementation. You'll also have to call this function some time before encoding/decoding.
Creating something like this:
func Register(i interface{}) error {
gob.Register(i)
return nil
}
And then when writing a new implementation in your (let's say) dummy package, you can put this line below/above the type declaration.
var regResult = reg.Register(myNewType{})
This will be called on startup (because it's global).
Advantage: not having to update the registerImplementations method.
Disadvantage: you'll have your registers all across your code (which can consist of a lot of files) - so you might miss one.
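For completeness, a minimal sketch of option 2, wiring the Register helper above to a hypothetical dummy package (the reg package name and the import path are assumptions, not anything prescribed by gob):

// reg/reg.go
package reg

import "encoding/gob"

// Register wraps gob.Register and returns a value so it can be used
// in a package-level var initializer.
func Register(i interface{}) error {
	gob.Register(i)
	return nil
}

// dummy/type1.go
package dummy

import "example.com/project/reg" // hypothetical import path

type Type1 struct{ /* fields */ }

// Evaluated during package initialization, before main, so Type1 is
// registered with gob as soon as the dummy package is imported.
var _ = reg.Register(&Type1{})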
As to which is better: I'll leave that up to you.
Related
I'm on my first Golang project, which consists of a small router for an MVC structure. Basically, what I hope it does is take the request URL, separate it into chunks, and forward the execution flow to the package and function specified in those chunks, providing some kind of fallback when there is no match for these values in the application.
An alternative would be to map all packages and functions into variables, then look for a match in the contents of those variables, but this is not a dynamic solution.
The alternative I have used in other languages (PHP and JS) is to reference those names syntactically, where the language somehow considers the value of the variable instead of its literal name. Something like: {packageName}.{functionName}(). The problem is that I still haven't found any syntactic way to do this in Golang.
Any suggestions?
func ParseUrl(request *http.Request) {
	// out of this URL: http://www.example.com/controller/method?key=2&anotherKey=3
	var requestedFullURI = request.URL.RequestURI() // returns "/controller/method?key=2&anotherKey=3"
	controlFlowString, _ := url.Parse(requestedFullURI)
	substrings := strings.Split(controlFlowString.Path, "/") // returns ["", "controller", "method"]
	if len(substrings) > 1 && substrings[1] != "" {
		// Here we'll check if substrings[1] matches an existing package (controller) name
		// and provide some error return in case it does not
		if len(substrings) > 2 && substrings[2] != "" {
			// check if substrings[2] matches an existing function (method) name inside
			// the requested package and run it, passing on the control flow
		} else {
			// there's no requested method, we'll just run some fallback
		}
	} else {
		err := errors.New("you have not determined a valid controller")
		fmt.Println(err)
	}
}
You can still solve this in a half-dynamic manner. Define your handlers as methods of an empty struct and register just that struct. This will greatly reduce the amount of registration you have to do, and your code will be more explicit and readable. For example:
handler.register(MyStruct{}) // the implementation is for another question
The following code shows all that's needed to make all methods of MyStruct accessible by name. With some effort and the help of the reflect package you can support routing like MyStruct/SomeMethod. You can even define a struct with fields which can serve as branches, so even MyStruct/NestedStruct/SomeMethod is possible.
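(The snippet itself didn't survive in this copy of the answer; here is a minimal sketch of the idea, assuming a MyStruct with a single exported method - all names are placeholders:)

package main

import (
	"fmt"
	"reflect"
)

type MyStruct struct{}

func (MyStruct) SomeMethod() {
	fmt.Println("SomeMethod called")
}

// callByName looks up an exported method on v by name via reflection
// and invokes it with no arguments.
func callByName(v interface{}, name string) error {
	m := reflect.ValueOf(v).MethodByName(name)
	if !m.IsValid() {
		return fmt.Errorf("no such method: %s", name)
	}
	m.Call(nil)
	return nil
}

func main() {
	_ = callByName(MyStruct{}, "SomeMethod") // prints "SomeMethod called"
}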
Don't do this, please
Your idea may sound like a good one, but believe me, it's not. It's a lot better to use a framework like go-chi, which is more flexible and readable, than to do some reflect madness that no one will understand. Not to mention that traversing type trees in Go was never a fast task. Your routes should not be defined by the names of structures in your backend. When you commit to this you will end up with strangely named routes that use PascalCase instead of something-like-this.
What you're describing is very typical of PHP and JavaScript, and completely inappropriate to Go. PHP and JavaScript are dynamic, interpreted languages. Go is a static, compiled language. Rather than trying to apply idioms which do not fit, I'd recommend looking for ways to achieve the same goals using implementations more suitable to the language at hand.
In this case, I think the closest you get to what you're describing while still maintaining reasonable code would be to use a handler registry as you described, but register to it automatically in package init() functions. Each init function will be called once, at startup, giving the package an opportunity to initialize variables and register things like handlers and drivers. When you see things like database driver packages that need to be imported even though they're not referenced, init functions are why: importing the package gives it the chance to register the driver. The expvar package even does this to register an HTTP handler.
You can do the same thing with your handlers, giving each package an init function that registers the handler(s) for that package along with their routes. While this isn't "dynamic", being dynamic has zero value here - the code can't change after it's compiled, which means that all you get from being dynamic is slower execution. If the "dynamic" routes change, you'd have to recompile and restart anyway.
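A hedged sketch of that registry idea (the registry package, its Handle function, and the import paths are all assumptions for illustration, not part of any library):

// registry/registry.go
package registry

import "net/http"

var mux = http.NewServeMux()

// Handle lets controller packages register their routes from init().
func Handle(pattern string, h http.Handler) { mux.Handle(pattern, h) }

// Mux returns the assembled mux, ready for http.ListenAndServe.
func Mux() *http.ServeMux { return mux }

// controllers/user/user.go
package user

import (
	"net/http"

	"example.com/app/registry" // hypothetical module path
)

func init() {
	// Runs once at startup because main imports this package
	// (possibly as a blank import), just like database driver packages.
	registry.Handle("/user/", http.HandlerFunc(show))
}

func show(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("user controller"))
}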
I'm learning Golang and as an exercise in using interfaces I'm building a toy program. I'm having a problem trying to use a type that "should implement" two interfaces - one way to solve that in C++ and Java would be to use inheritance (there are other techniques, but I think that is the most common). As I lack that mechanism in Golang, I'm not sure how to proceed. Below is the code:
var (
faces = []string{"Ace", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine", "Ten", "Jack", "Queen", "King"}
suits = []string{"Hearts", "Diamonds", "Spades", "Clubs"}
)
type Card interface {
GetFace() string
GetSuit() string
}
type card struct {
cardNum int
face string
suit string
}
func NewCard(num int) Card {
newCard := card{
cardNum: num,
face: faces[num%len(faces)],
suit: suits[num/len(faces)],
}
return &newCard
}
func (c *card) GetFace() string {
return c.face
}
func (c *card) GetSuit() string {
return c.suit
}
func (c *card) String() string {
return fmt.Sprintf("%s%s ", c.GetFace(), c.GetSuit())
}
What I'm trying to achieve:
I would like to hide my struct type and only export the interface so that the clients of the code use only the "Card" interface
I would like to have a string representation of the struct, hence the implementation of the interface with the "String()" method in order to be able to call "fmt.Println()" on instantiations of my struct
The problem comes when I'm trying to use a new card through the "Card" interface and also trying to get the string representation. I cannot pass the interface as the parameter of the implementation of the "String()" method, as there is a compiler error related to the addressability of an interface at the core language level (still digging through that documentation). The very simple test example below exposes the issue:
func TestString(t *testing.T) {
card := NewCard(0)
assert.EqualValues(t, "AceHearts ", card.String(), " newly created card's string repr should be 'AceHearts '")
}
The compiler tells me, for good reason, that "card.String undefined (type Card has no field or method String)". I could just add the "String()" method to my "Card" interface, but I do not find that to be clean: I have other entities implemented with the same model and I would have to add that redundancy everywhere; there is already an interface with that method.
What would be a good solution for the above issue that I'm having?
Edit:(to address some of the very good comments)
I do not expect to have another implementation of the Card interface; I'm not sure I grasp why I would want to do that, that is, change the interface
I would like to have the Card interface to hide away implementation details and for the clients to program against the interface and not against the concrete type
I would like to always have access to the String() interface for all clients of the "card struct" instantiations(including the ones instantiated via the Card interface). I'm not interested in having clients only with the String interface. In some other languages this can be achieved by implementing both interfaces - multiple inheritance. I'm not saying that is good or wrong, or trying to start a debate about that, I'm just stating a fact!
My intent is to find out if the language has any mechanism to fulfill those requirements simultaneously. If that is not possible or maybe from the point of view of the design the problem should be tackled in a different manner, then I'm ready to be educated
Type assertions are very verbose and explicit and would expose implementation details - they have their places but I do not think they are appropriate in the situation I have
I should go over some prefacing points first:
Interfaces in Go are not the same as interfaces in other languages. You shouldn't assume that every idea from other languages should transfer over automatically. A lot of them don't.
Go has neither classes nor objects.
Go is not Java and Go is not C++. Its type system is significantly and meaningfully different from those languages.
From your question:
I would like to have the Card interface to hide away implementation details and for the clients to program against the interface and not against the concrete type
This is the root of your other problems.
As mentioned in the comments, I see this in multiple other packages and regard it as a particularly pesky anti-pattern. First, I will explain the reasons why this pattern is "anti" in nature.
Firstly and most pertinently, this point is proven by your very example. You employed this pattern, and it has resulted in bad effects. As pointed out by mkopriva, it has created a contradiction which you must resolve.
Secondly, this usage of interfaces is contrary to their intended use, and you are not achieving any benefit by doing it.
Interfaces are Go's mechanism of polymorphism. The usage of interfaces in parameters makes your code more versatile. Think of the ubiquitous io.Reader and io.Writer. They are fantastic examples of interfaces. They are the reason why you can patch two seemingly unrelated libraries together and have them just work. For example, you can log to stderr, or log to a disk file, or log to an http response. Each of these works exactly the same way, because log.New takes an io.Writer parameter, and a disk file, stderr, and an http response writer all implement io.Writer. Using interfaces simply to "hide implementation details" (I explain later why this point fails) does not add any flexibility to your code. If anything, it is an abuse of interfaces by leveraging them for a task they weren't meant to fulfill.
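A small illustration of that point (plain standard library usage, nothing specific to the question): the same logging code can target stderr or an HTTP response, purely because log.New accepts an io.Writer:

package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	stderrLog := log.New(os.Stderr, "app: ", log.LstdFlags)
	stderrLog.Println("logging to stderr")

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// http.ResponseWriter is an io.Writer too, so the logger can
		// write straight into the response body.
		respLog := log.New(w, "req: ", 0)
		respLog.Println("logging into the response")
	})
	// http.ListenAndServe(":8080", nil) // omitted in this sketch
}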
Point / Counterpoint
"Hiding my implementation provides better encapsulation and safety by making sure all the details are hidden."
You are not achieving any greater encapsulation or safety. By making the struct fields unexported (lowercase), you have already prevented any clients of the package from messing with the internals of your struct. Clients of the package can only access the fields or methods that you have exported. There's nothing wrong with exporting a struct and hiding every field.
"Struct values are dirty and raw and I don't feel good about passing them around."
Then don't pass structs, pass pointers to struct. That's what you're already doing here. There's nothing inherently wrong with passing structs. If your type behaves like a mutable object, then pointer to struct is probably appropriate. If your type behaves more like an immutable data point, then struct is probably appropriate.
"Isn't it confusing if my package exports package.Struct, but clients have to always use *package.Struct? What if they make a mistake? It's not safe to copy my struct value; things will break!"
All you realistically have to do to prevent problems is make sure that your package only returns *package.Struct values. That's what you're already doing here. A vast majority of the time, people will be using the short assignment :=, so they don't have to worry about getting the type correct. If they do set the type manually and they choose package.Struct by accident, then they will get a compilation error when trying to assign a *package.Struct to it.
"It helps to decouple the client code from the package code"
Maybe. But unless you have a realistic expectation that you have multiple existent implementations of this type, then this is a form of premature optimization (and yes it does have consequences). Even if you do have multiple implementations of your interface, that's still not a good reason why you should actually return values of that interface. A majority of the time it is still more appropriate to just return the concrete type. To see what I mean, take a look at the image package from the standard library.
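For instance, the image package declares the image.Image interface, yet its constructors return concrete types (standard library behavior, nothing specific to this question):

package main

import (
	"fmt"
	"image"
)

func main() {
	// NewRGBA returns the concrete *image.RGBA, not the image.Image interface...
	img := image.NewRGBA(image.Rect(0, 0, 10, 10))

	// ...but it still satisfies image.Image, so interface-typed code accepts it.
	var i image.Image = img
	fmt.Println(i.Bounds())
}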
When is it actually useful?
The main realistic case where making a premature interface AND returning it might help clients, is this:
Your package introduces a second implementation of the interface
AND clients have statically and explicitly (not :=) used this data type in their functions or types
AND clients want to reuse those types or functions for the new implementation also.
Note that this wouldn't be a breaking API change even if you weren't returning the premature interface, as you're only adding a new type and constructor.
If you decided to only declare this premature interface, and still return concrete types (as done in the image package), then all the client would likely need to do to remedy this is spend a couple minutes using their IDE's refactor tool to replace *package.Struct with package.Interface.
It significantly hinders the usability of package documentation
Go has been blessed with a useful tool called Godoc. Godoc automatically generates documentation for a package from source. When you export a type in your package, Godoc shows you some useful things:
The type, all exported methods of that type, and all functions that return that type are organized together in the doc index.
The type and each of its methods has a dedicated section in the page where the signature is shown, along with a comment explaining its usage.
Once you bubble-wrap your struct into an interface, your Godoc representation is hurt. The methods of your type are no longer shown in the package index, so the package index is no longer an accurate overview of the package as it is missing a lot of key information. Also, each of the methods no longer has its own dedicated space on the page, making its documentation harder to both find and read. Finally, it also means that you no longer have the ability to click the method name on the doc page to view the source code. It's also no coincidence that in many packages that employ this pattern, these de-emphasized methods are most often left without a doc comment, even when the rest of the package is well documented.
In the wild
https://pkg.go.dev/github.com/zserge/lorca
https://pkg.go.dev/github.com/googollee/go-socket.io
In both cases we see a misleading package overview, along with a majority of interface methods being undocumented.
(Please note I have nothing against any of these developers; obviously every package has its faults and these examples are cherry-picked. I'm also not saying that they had no justification to use this pattern, just that their package doc is hindered by it)
Examples from the standard library
If you are curious about how interfaces are "intended to be used", I would suggest looking through the docs for the standard library and taking note of where interfaces are declared, taken as parameters, and returned.
https://golang.org/pkg/net/http/
https://golang.org/pkg/io/
https://golang.org/pkg/crypto/
https://golang.org/pkg/image/
Here is the only standard library example I know of that is comparable to the "interface hiding" pattern. In this case, reflect is a very complex package and there are several implementations of reflect.Type internally. Also note that in this case, even though it is necessary, no one should be happy about it because the only real effect for clients is messier documentation.
https://golang.org/pkg/reflect/#Type
tl;dr
This pattern will hurt your documentation, while accomplishing nothing in the process, except you might make it slightly quicker in very specific cases for clients to use parallel implementations of this type that you may or may not introduce in the future.
These interface design principles are meant for the benefit of the client, right? Put yourself in the shoes of the client and ask: what have I really gained?
Not entirely sure if this is what you are looking for but you could try embedding the other interface in Card interface as shown below.
type Printer interface {
String() string
}
type Card interface {
Printer // embed printer interface
GetFace() string
GetSuit() string
}
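For what it's worth, the standard library already names this one-method interface fmt.Stringer, so embedding that instead is equivalent:

import "fmt"

type Card interface {
	fmt.Stringer // interface { String() string }
	GetFace() string
	GetSuit() string
}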
The Card interface has no String method. It doesn't matter that the underlying type card has one, because that method is hidden from you (unless you access it via reflection).
Adding a String() string method to Card will solve the problem:
type Card interface {
GetFace() string
GetSuit() string
String() string
}
The Go language does not have subtype polymorphism. Therefore, the pattern you want to achieve is not encouraged by the very foundations of the language. You may achieve this undesirable pattern by composing structs and interfaces, though.
Let's say in a 3rd party library we have an interface and a struct implementing this interface. Let's also assume there is a function that takes ParentInterface as an argument and behaves differently for different concrete types.
type ParentInterface interface {
SomeMethod()
}
type ParentStruct struct {
	// ...
}

func (ParentStruct) SomeMethod() {} // ParentStruct implements ParentInterface

func SomeFunction(p ParentInterface) int {
	switch p.(type) {
	case ParentStruct:
		return 1
	}
	return 0
}
In our code we want to use this interface, but with our augmented behavior, so we embed it in our own struct. The compiler actually allows us to call methods of ParentInterface on MyStruct directly:
type MyStruct struct {
ParentInterface
}
parentStruct := ParentStruct{...}
myStruct := MyStruct{parentStruct}
parentStruct.SomeMethod() // Compiler OK.
myStruct.SomeMethod() // Compiler OK. Result is same. Great.
SomeFunction(parentStruct) // Compiler OK. Result is 1.
SomeFunction(myStruct.ParentInterface) // Compiler OK. Result is 1.
SomeFunction(myStruct) // Compiler OK. Result is 0. (!)
Isn't the last case a problem? I've encountered this kind of bug more than once. Because I happily use MyStruct as an alias of ParentInterface in my code (which is why I defined it in the first place), it's hard to always remember that we cannot call SomeFunction on MyStruct directly (the compiler says we can!).
So what's the best practice to avoid this kind of mistake? Or is it actually a flaw of the compiler, which should forbid the use of SomeFunction(myStruct) altogether since the result is untrustworthy anyway?
There is no compiler mistake here, and the result you experienced is the expected one.
Your SomeFunction() function explicitly states it wants to do different things based on the dynamic type of the passed interface value, and that is exactly what happens.
We introduce interfaces in the first place so we don't have to care about the dynamic type that implements it. The interface gives us guarantees about existing methods, and those are the only things you should rely on, you should only call those methods and not do some type-switch or assertion kung-fu.
Of course this is the ideal world, but you should stick to it as much as possible.
Even if in some cases you can't fit everything into the interface, you can again type assert another interface and not a concrete type out of it if you need additional functionality.
A typical example of this is writing an http.Handler where you get the response writer as an interface: http.ResponseWriter. It's quite minimalistic, but the actual type passed can do a lot more. To access that "more", you may use additional type assertions to obtain that extra interface, such as http.Pusher or http.Flusher.
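A short sketch of that http.Flusher example (just standard library usage):

package main

import "net/http"

// handler asserts an extra interface out of the http.ResponseWriter it was given.
func handler(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("first chunk\n"))
	if f, ok := w.(http.Flusher); ok {
		f.Flush() // push the chunk to the client immediately
	}
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}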
In Go, there is no inheritance and polymorphism. Go favors composition. When you embed a type into another type (struct), the method set of the embedded type will be part of the embedder type. This means any interfaces the embedded type implemented, the embedder will also implement those. And calling methods of those implemented interfaces will "forward" the call to the embedded type, that is, the receiver of those method calls will be the embedded value. This is unless you "override" those methods by providing your own implementation with the receiver type being the embedder type. But even in this case virtual routing will not happen. Meaning if the embedded type has methods A() and B(), and implementation of A() calls B(), if you provide your own B() on the embedder, calling A() (which is of the embedded type) will not call your B() but that of the embedded type.
This is not something to avoid (you can't avoid it), this is something to know about (something to live with). If you know how this works, you just have to take this into consideration and count with it.
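A minimal sketch of that "no virtual routing" behavior (the type names Base and Embedder are made up for illustration):

package main

import "fmt"

type Base struct{}

func (b Base) A() string { return "A calls " + b.B() }
func (b Base) B() string { return "base B" }

type Embedder struct {
	Base
}

// "Override" B on the embedder.
func (Embedder) B() string { return "embedder B" }

func main() {
	e := Embedder{}
	fmt.Println(e.B()) // "embedder B" - the embedder's own B shadows the promoted one
	fmt.Println(e.A()) // "A calls base B" - Base.A still calls Base.B, not Embedder.B
}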
Because I'm happily use MyStruct as an alias of ParentInterface in my code (which is why I define it in the first place)
You shouldn't use embedding to create aliases, that is a misuse of embedding. Embedding a type in your own will not be an alias. Implementations of existing methods that check concrete types will "fail" as you experienced (meaning they will not find a match to their expected concrete type).
Unless you want to "override" some methods or implement certain interfaces this way, you shouldn't use embedding. Just use the original type. Simplest, cleanest. If you need aliases, Go 1.9 introduced the type alias feature whose syntax is:
type NewType = ExistingType
After the above declaration NewType will be identical to ExistingType, they will be completely interchangeable (and thus have identical method sets). But know that this does not add any new "real" feature to the language, anything that is possible with type aliases is doable without them. It is mainly to support easier, gradual code refactoring.
Suppose that I have a type type T int and I want to define some logic to operate on this type.
What abstraction should I use, and when?
Defining a method on that type:
func (t T) someLogic() {
// ...
}
Defining a function:
func somelogic(t T) {
// ...
}
Some situations where you tend to use methods:
Mutating the receiver: Things that modify fields of the object are often methods. It's less surprising to your users that x.Foo will modify x than that Foo(x) will.
Side effects through the receiver: Things are often methods on a type if they have side effects on/through the object in subtler ways, like writing to a network connection that's part of the struct, or writing via pointers or slices or so on in the struct.
Accessing private fields: In theory, anything within the same package can see unexported fields of an object, but more commonly, just the object's constructor and methods do. Having other things look at unexported fields is sort of like having C++ friends.
Necessary to satisfy an interface: Only methods can be part of interfaces, so you may need to make something a method to just satisfy an interface. For example, Peter Bourgon's Go intro defines type openWeatherMap as an empty struct with a method, rather than a function, just to satisfy the same weatherProvider interface as other implementations that aren't empty structs.
Test stubbing: As a special case of the above, sometimes interfaces help stub out objects for testing, so your stub implementations might have to be methods even if they have no state.
Some where you tend to use functions:
Constructors: func NewFoo(...) (*Foo) is a function, not a method. Go has no notion of a constructor, so that's how it has to be.
Running on interfaces or basic types: You can't add methods on interfaces or basic types (unless you use type to make them a new type). So, strings.Split and reflect.DeepEqual must be functions. Also, io.Copy has to be a function because it can't just define a method on Reader or Writer. Note that these don't declare a new type (e.g., strings.MyString) to get around the inability to do methods on basic types.
Moving functionality out of oversized types or packages: Sometimes a single type (think User or Page in some Web apps) accumulates a lot of functionality, and that hurts readability or organization or even causes structural problems (like if it becomes harder to avoid cyclic imports). Making a non-method out of a method that isn't mutating the receiver, accessing unexported fields, etc. might be a refactoring step towards moving its code "up" to a higher layer of the app or "over" to another type/package, or the standalone function is just the most natural long-term place for it. (Hat tip Steve Francia for including an example of this from hugo in a talk about his Go mistakes.)
Convenience "just use the defaults" functions: If your users might want a quick way to use "default" object values without explicitly creating an object, you can expose functions that do that, often with the same name as an object method. For instance, http.ListenAndServe() is a package-level function that makes a trivial http.Server and calls ListenAndServe on it.
Functions for passing behavior around: Sometimes you don't need to define a type and interface just to pass functionality around and a bare function is sufficient, as in http.HandleFunc() or template.Funcs() or for registering go vet checks and so on. Don't force it.
Functions if object-orientation would be forced: Say your main() or init() are cleaner if they call out to some helpers, or you have private functions that don't look at any object fields and never will. Again, don't feel like you have to force OO (à la type Application struct{...}) if, in your situation, you don't gain anything by it.
When in doubt, if something is part of your exported API and there's a natural choice of what type to attach it to, make it a method. However, don't warp your design (pulling concerns into your type or package that could be separate) just so something can be a method. Writers don't WriteJSON; it'd be hard to implement one if they did. Instead you have JSON functionality added to Writers via a function elsewhere, json.NewEncoder(w io.Writer).
If you're still unsure, first write so that the documentation reads clearly, then so that code reads naturally (o.Verb() or o.Attrib()), then go with what feels right without sweating over it too much, because often you can rearrange it later.
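A tiny sketch pulling a few of these points together (the Counter type and all its names are made up for illustration): the constructor is a function, the mutator is a pointer-receiver method, and a helper that only needs the exported API stays a function:

package main

import "fmt"

type Counter struct {
	n int // unexported: only this package's methods touch it
}

// NewCounter is a constructor, so it is a function, not a method.
func NewCounter() *Counter { return &Counter{} }

// Inc mutates the receiver, so it reads naturally as a method.
func (c *Counter) Inc() { c.n++ }

// Value just exposes state.
func (c *Counter) Value() int { return c.n }

// IncBy only uses the exported API, so it can stay a plain function
// and could live in a higher layer if needed.
func IncBy(c *Counter, delta int) {
	for i := 0; i < delta; i++ {
		c.Inc()
	}
}

func main() {
	c := NewCounter()
	IncBy(c, 3)
	fmt.Println(c.Value()) // 3
}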
Use the method if you are manipulating internal secrets of your object
func (t *T) someLogic() {
	t.mu.Lock()
	// ...
}
Use the function if you are using the public interface of the object
func somelogic(t *T) {
	t.DoThis()
	t.DoThat()
}
If you want to change the T object, use:
func (t *T) someLogic() {
// ...
}
If you don't change the T object and would still like an object-oriented (method) style, use:
func (t T) someLogic() {
// ...
}
but remember that this will make a temporary copy of the T object to call someLogic.
If you like the way the C language does it, use:
func somelogic(t T) {
t.DoThis()
t.DoThat()
}
One more thing: in Go, the type comes after the variable name.
I have a class which could benefit with the state pattern. However the common "Replace Type Code with State/Strategy" refactoring does not seem to fit well in my case: the state is calculated by watching other objects, there is no type code variable.
Most of my class code is just "calculating" some state when it is called, and running the functions for that state.
Forcing a type code variable feels wrong because:
I will be forced to call an "updateState()" function in every place where the polymorphic functions are used.
My class will no longer be 100% behavior, which I would rather have than some internal state.
Since the state must be calculated every single time its functions are called, I wonder if I am thinking about the wrong pattern.
Normally I refactor this:
if (this.someOtherThingIsRunning()) {
...
} else {
...
}
like this:
typecode.doSomething()
// that being polymorphic
it seems strange doing:
updateTypeCode()
typecode.doSomething()
Does the state pattern apply to this case? Is there any alternative way to get polymorphism without a type code?
While writing my question, I realized that maybe I could just make the type code a function and return a temporary (function-scoped) type code. Like:
typecode().doSomething()
This solution would never store the state, which is what I want to avoid. However I am still wondering if my problem started because I am using the wrong pattern.
If you're open to storing the state, maybe think about combining State and Observer to modify the state as the dependent classes change (rather than checking on every call). There are only certain models this will work for, though.
Otherwise you might as well say object.doSomething() and have the checks inside doSomething(). In this case using design patterns doesn't present any significant advantages (though if you loosen up slightly on the definitions of design patterns, many things would be considered such). I'd probably go with:
doSomething()
{
if (someOtherThingIsRunning())
doOneThing();
else
doAnotherThing();
}
The alternative (that you already suggested) is to have the above checks in typecode() and to return another class that contains the method doSomething().
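A small Go sketch of that last alternative, with nothing stored between calls (all type names here are invented for illustration): the "type code" is computed on demand, and the returned value already carries the right doSomething behavior:

package main

import "fmt"

type state interface{ doSomething() }

type runningState struct{}

func (runningState) doSomething() { fmt.Println("other thing running: do one thing") }

type idleState struct{}

func (idleState) doSomething() { fmt.Println("nothing else running: do another thing") }

type thing struct{ otherRunning bool }

// typecode recalculates the state on every call; nothing is stored.
func (t thing) typecode() state {
	if t.otherRunning {
		return runningState{}
	}
	return idleState{}
}

func main() {
	t := thing{otherRunning: true}
	t.typecode().doSomething() // picks the "running" behavior
}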