How to write an append "method" in Go

I would like to explore Go's possibilities. How would you turn append into a method? Simply put: st.app(t) == append(st, t)
This is what I got:
type T interface{}
type ST []interface{}
func (st ST) app(t T) ST {
	return append(st, t)
}
but this code does not check for type compatibility:
append([]int{1,2},"a") // correctly gives error
ST{1,2}.app("a") // dumbly gives [1 2 a] !!!
I know why the code does not check for type compatibility, but what is the right way to do it? Is it possible?
I would appreciate your help in understanding how Go works.

The right way to do this is to call append() as the built-in function it is. It's not an accident that it's a built-in function; you can't write append itself in Go, and anything that approximates it would be unsafe (and over-complicated).
Don't fight Go. Just write the line of code. In the time you spend trying to create tricky generics, you could have solved three more issues and shipped the product. That's how Go works.
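For instance, calling the built-in directly keeps the compiler's type checking (a minimal sketch):

	st := []int{1, 2}
	st = append(st, 3)      // fine
	// st = append(st, "a") // compile-time error: type mismatch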
This isn't to say Go will never have generics, but as we move towards Go 2 consider the following from Russ Cox:
For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. As a result, I can't answer a design question like whether to support generic methods, which is to say methods that are parameterized separately from the receiver. If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones.
(On the other hand, Generics is the most responded-to topic in ExperienceReports, so it's not that no one is aware of the interest. But don't fight Go. It's a fantastic language for shipping great software. It is not, however, a generic language.)

4 years later, there is actually a possible solution to get methods that are parameterized separately from the receiver.
That solution comes with Go 1.18 generics:
It does not support parameterized methods (a method cannot declare its own type parameters).
It does support parameterized receivers, meaning the receiver's type may itself have type parameters.
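For illustration, a parameterized receiver is already enough to make the question's example type-safe (a minimal sketch assuming Go 1.18+; the generic ST here is my own illustration):

	type ST[T any] []T

	func (st ST[T]) app(t T) ST[T] {
		return append(st, t)
	}

	// ST[int]{1, 2}.app(3)   // works
	// ST[int]{1, 2}.app("a") // compile-time error: "a" is not an int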
Jaana B. Dogan proposes "Generics facilitators in Go" based on that
Parameterized receivers are a useful tool and helped me develop a common pattern, facilitators, to overcome the shortcomings of having no parameterized methods.
Facilitators are simply a new type that has access to the type you wished you had generic methods on.
Her example:
package database

type Client struct{ ... }

type Querier[T any] struct {
	client *Client
}

func NewQuerier[T any](c *Client) *Querier[T] {
	return &Querier[T]{
		client: c,
	}
}

func (q *Querier[T]) All(ctx context.Context) ([]T, error) {
	// implementation
}

func (q *Querier[T]) Filter(ctx context.Context, filter ...Filter) ([]T, error) {
	// implementation
}
You can use it as:
querier := database.NewQuerier[Person](client)
In your case, though, while you could define an Appender struct, it would still need to operate on a concrete struct, which is why using the built-in append() remains preferable.
But in general, should you need parameterized methods, Jaana's pattern can help.

Related

Could anybody provide your own or a standard convention for a "clone method" that works well, for reference?

I can't find a "clone" method convention in Golang, but it seems necessary to have one.
I have only seen the built-in way, *clonedObj = *obj, but it is too low-level and can't handle a (when necessary) deep copy of a case like struct { member *CompositionObj }.
I doubt whether the func (obj ClassA) Clone() interface{} prototype will work, because calling obj2 := obj.Clone() will "lose" the method set of ClassA and require explicit code like obj2.(*ClassA) afterwards.
Please advise on a working direction.
This answer to a similar question regarding maps suggests using the gob package. The documentation states:
A stream of gobs is self-describing. Each data item in the stream is preceded by a specification of its type, expressed in terms of a small set of predefined types. Pointers are not transmitted, but the things they point to are transmitted; that is, the values are flattened. Nil pointers are not permitted, as they have no value. Recursive types work fine, but recursive values (data with cycles) are problematic. This may change.
so it may not be suitable for your use case.
That said, your question largely depends on your actual use case. You usually do not need a generic way to deep-copy things; you can either get away with the built-in copy mechanics or write concrete copy functions for the types that actually need it.
An alternative might be the deepcopy package, but I have no experience with it myself; I just found it on Google.
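For the concrete-copy-function route mentioned above, here is a minimal sketch of what a deep copy could look like for the struct { member *CompositionObj } case from the question (type names are illustrative):

	type CompositionObj struct {
		Data string
	}

	type ClassA struct {
		member *CompositionObj
	}

	// Clone returns a deep copy: the pointed-to CompositionObj is duplicated,
	// so the clone shares no state with the original.
	func (a *ClassA) Clone() *ClassA {
		clone := *a // shallow copy of all value fields
		if a.member != nil {
			m := *a.member // copy the pointed-to value
			clone.member = &m
		}
		return &clone
	}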
OK, since after a while no one else has given me a proper reference, I have found some reference examples of how to clone in Go myself and want to share them.
(Only upvote if this answer is useful to you; I'm not after votes. Other, better answers and comments are welcome.)
I found this prototype in the package github.com/jinzhu/gorm (a database ORM library) for reference:
func (s *DB) clone() *DB {
	db := &DB{
		...
	}
	...
	return db
}
And a similar pattern in the package "golang.org/x/net/html/atom":
func (n *Node) clone() *Node {
	m := &Node{
		Type: n.Type,
		...
	}
	...
	return m
}
The above prototype is enough if Clone()'s callers always know your object's type when cloning. (You also need an uppercase Clone() to make the method exported, i.e. "public".)
However, if you want the more advanced feature that a variable may hold any object implementing a common base interface, here is my sample:
func (t *T) Clone() YourBaseInterface
Where YourBaseInterface is:
type YourBaseInterface interface {
	Clone() YourBaseInterface
	OtherMethod1()
	...
}
Or you can simply use interface{} instead of YourBaseInterface as the return type, and do a type assertion like obj2 := obj.Clone().(*YourBaseType) after cloning.
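A minimal, self-contained sketch of that interface-based variant (Cloner and Box are illustrative names, not a standard convention):

	package main

	type Cloner interface {
		Clone() Cloner
	}

	type Box struct {
		Width, Height float64
	}

	// Clone returns the copy as the base interface; callers that need the
	// concrete type must use a type assertion afterwards.
	func (b *Box) Clone() Cloner {
		c := *b
		return &c
	}

	func main() {
		var obj Cloner = &Box{Width: 4, Height: 3}
		obj2 := obj.Clone().(*Box) // type assertion recovers the concrete type
		obj2.Width = 10            // does not affect the original
	}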
CAUTION
There is one drawback to this prototype. Because Go doesn't support it as a built-in, the Clone() method won't be called by some language features, e.g. when you copy(dest, src) a []YourTypeWithClone slice. Instead, that still does a plain *elem2 = *elem1 struct copy. The solutions are either to avoid those built-ins, or to fall back to designing the struct's members so that a plain copy is sufficient for its copying purposes, where possible.
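For example, instead of the built-in copy you can clone element by element (a sketch, assuming the Cloner interface from the previous sketch):

	// cloneSlice deep-copies a slice by calling Clone on every element,
	// instead of relying on copy's plain element assignment.
	func cloneSlice(src []Cloner) []Cloner {
		dst := make([]Cloner, len(src))
		for i, e := range src {
			dst[i] = e.Clone()
		}
		return dst
	}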

What is the preferred way to implement testing mocks in Go?

I am building a simple CLI tool in Go that acts as a wrapper for various password stores (Chef Vault, Ansible Vault, HashiCorp Vault, etc.). This is partially an exercise to get familiar with Go.
In working on this, I came across a situation where I was writing tests and found I needed to create interfaces for many things, just to have the ability to mock dependencies. As such, a fairly simple implementation seems to have a bunch of abstraction, for the sake of the tests.
However, I was recently reading The Go Programming Language and found an example where they mocked their dependencies in the following way.
func Parse() map[string]string {
	s := openStore()
	// Do something with s to parse into a map…
	return s.contents
}

var storeFunc = func openStore() *Store {
	// concrete implementation for opening store
}

// and in the testing file…
func TestParse(t *testing.T) {
	openStore := func() {
		// set contents of mock…
	}
	parse()
	// etc...
}
So for the sake of testing, we store this concrete implementation in a variable, and then we can essentially re-declare the variable in the tests and have it return what we need.
Otherwise, I would have created an interface for this (despite currently only having one implementation) and injected that into the Parse method. This way, we could mock it for the test.
So my question is: What are the advantages and disadvantages of each approach? When is it more appropriate to create an interface for the purposes of a mock, versus storing the concrete function in a variable for re-declaration in the test?
For testing purposes, I tend to use the mocking approach you described instead of creating new interfaces. One of the reasons is that, AFAIK, there is no direct way to identify which structs implement an interface, which matters to me when I want to know whether the mocks are doing the right thing.
The main drawback of this approach is that the variable is essentially a package-level global variable (even though it's unexported). So all the drawbacks with declaring global variables apply.
In your tests, you will definitely want to use defer to re-assign storeFunc back to its original concrete implementation once the tests completed.
var storeFunc = func() *Store {
	// concrete implementation for opening store
}

// and in the testing file…
func TestParse(t *testing.T) {
	storeFuncOriginal := storeFunc
	defer func() {
		storeFunc = storeFuncOriginal
	}()
	storeFunc = func() *Store {
		// set contents of mock…
		return &Store{}
	}
	Parse()
	// etc...
}
By the way, var storeFunc = func openStore() *Store won't compile.
There is no "right way" of answering this.
Having said this, I find the interface approach more general and clearer than defining a function variable and setting it for the test.
Here are some comments on why:
The function variable approach does not scale well if there are several functions you need to mock (in your example it is just one function).
The interface makes it clearer which behaviour is being injected into the function/module, as opposed to the function variable, which ends up hidden in the implementation.
The interface allows you to inject a type with a state (a struct) which might be useful for configuring the behaviour of the mock.
You can of course rely on the "function variable" approach for simple cases and use the "interface" for more complex functionality, but if you want to be consistent and use just one approach I'd go with the "interface".
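For comparison, a minimal sketch of the interface approach (Opener is a hypothetical name, and the sketch assumes the Store type from the question exposes a contents map):

	// Opener abstracts how a store is obtained, so tests can inject a fake.
	type Opener interface {
		Open() (*Store, error)
	}

	func Parse(o Opener) (map[string]string, error) {
		s, err := o.Open()
		if err != nil {
			return nil, err
		}
		return s.contents, nil
	}

	// In the test file, a stub with canned contents satisfies the same interface.
	type stubOpener struct{ contents map[string]string }

	func (s stubOpener) Open() (*Store, error) {
		return &Store{contents: s.contents}, nil
	}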
I tackle the problem differently. Given
func Parse(s Store) map[string]string {
	// Do stuff on the interface Store
}
you have several advantages:
You can use a mock or a stub Store as you see fit.
Imho, the code becomes more transparent. The signature alone makes clear that a Store implementation is required. And the code does not need to be polluted with error handling for opening the Store.
The code documentation can be kept more concise.
However, this makes something pretty obvious: Parse is a function which can be attached to a store, which most likely makes more sense than passing the store around.
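For instance, with a concrete store type, Parse could simply become a method (a sketch with hypothetical names):

	type Store struct {
		contents map[string]string
	}

	// Parse attached to the store: the receiver already holds the data,
	// so nothing needs to be passed around or mocked for this step.
	func (s *Store) Parse() map[string]string {
		// do something with s.contents to build the result
		return s.contents
	}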

Why is Go designed not to raise an error when assigning a value to any interface that has a matching signature?

I'm just starting to learn Go; most of my background is in Java and Ruby.
I just wonder why the Go language designers intentionally chose not to require specifying an interface type when implementing an interface. Because of that, if someone accidentally assigns an object to an interface that matches its signature but that the object wasn't intended to implement, it could lead to a bug that cannot be caught at compile time.
You can try at this: https://play.golang.org/p/1N0kg7m4eE
package main

import "fmt"

type MathExpression interface {
	calculate() float64
}

type AreaCalculator interface {
	calculate() float64
}

type Square struct {
	width, height float64
}

// This implementation intends to implement the AreaCalculator interface
func (s Square) calculate() float64 {
	return s.width * s.height
}

func main() {
	// Suppose that a developer got this object from
	// somewhere without knowing that the object is a Square struct
	mysteryStruct := Square{width: 4, height: 3}

	var areaCalculator AreaCalculator = mysteryStruct
	fmt.Println("Area of something:", areaCalculator.calculate())

	var mathExpression MathExpression = mysteryStruct
	fmt.Println("This should not work:", mathExpression.calculate())
}
One advantage of "implicitly satisfied" interfaces (ones where types don't need to explicitly declare that they implement them) is that interfaces can be created after the fact. You can see several types that have a method in common, perhaps in different packages, and decide to write a function that can accept any of them by calling for a new interface that specifies that method.
Also, Go's approach to interfaces lets you write an abstraction of the behavior of a type in an existing package, without modifying the original package. The first example that comes to my mind is the File interface in the net/http package:
type File interface {
	io.Closer
	io.Reader
	io.Seeker
	Readdir(count int) ([]os.FileInfo, error)
	Stat() (os.FileInfo, error)
}
It represents a file that can be served by an http.FileServer. Normally it is an os.File, but it could be anything that fulfills the interface. For example, I think someone has made an implementation that serves files out of a zip archive.
Since the net/http package is defined in the standard library, it might have been possible to explicitly declare that os.File implements http.File—but it would have made the os package depend on the net/http package. This is a circular dependency, because net/http depends on os.
In an inheritance-based language, someone who was trying to do this would probably just give up on using an interface, and make http.FileServer require that all files be subclasses of os.File. But this would be a pain, because they wouldn't need any of the implementation of an os.File; they would just be inheriting from it to satisfy the type system.
The example in the OP works because of a poorly-chosen method name. It is not at all clear what a Square's calculate method should return. Its area? Its perimeter? Its diagonal? If the method were named CalculateArea, which would be the idiomatic name for the single method in an interface named AreaCalculator, it would not be possible to confuse an AreaCalculator and a MathExpression.
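A sketch of the renamed version, based on the question's code:

	type AreaCalculator interface {
		CalculateArea() float64
	}

	type MathExpression interface {
		Calculate() float64
	}

	func (s Square) CalculateArea() float64 {
		return s.width * s.height
	}

	// Square now satisfies AreaCalculator only; assigning it to a
	// MathExpression fails at compile time because Calculate is missing.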
Go uses implicit interfaces, which means that a type implements an interface if it has all the methods the interface requires. Note that you don't have to declare in code that a type implements an interface (as you do in Java).
Such a language design has good and bad consequences:
a type can implement an interface by accident (this happened in your example). In practice it happens very rarely, so IMO it is not a big problem.
without interface declarations in the code, you don't know which interfaces your type implements. This can be mitigated by a good IDE, or by the compile-time assertion sketched after this list.
you can easily create adapters and similar design patterns
you can easily create mocks
more readable code (type declarations are very simple in Go)
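The compile-time assertion mentioned in the list above can be written in one line (a sketch using the question's types):

	// Fails to compile if Square ever stops satisfying AreaCalculator.
	var _ AreaCalculator = Square{}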
One benefit of this approach is inversion of dependencies. In typical languages with static type systems, you have a source-level dependency from the implementation to the interface, which means you can't deploy them separately. With implicit interfaces there is no source-level dependency, and the implementation module can be deployed, developed, and built without the module containing the interface. This gives you flexibility usually reserved for dynamic type systems.

Does Go support operator overloading for builtin types like map and slice?

In python, I can define types that override list item access and dict value access by defining __getitem__(). Can I do something similar in Go?
// What I mean is:
type MySlice []MyItem
// Definition of MySlice
......

func (s MySlice) getItem(i int) MyItem {
}
......

// Access is overridden by calling getItem()
item := ms[0] // calling ms.getItem(0)
// Is this doable?
No, operator overloading is not a feature of Go.
Quoting from the official FAQ to explain why:
Method dispatch is simplified if it doesn't need to do type matching as well. Experience with other languages told us that having a variety of methods with the same name but different signatures was occasionally useful but that it could also be confusing and fragile in practice. Matching only by name and requiring consistency in the types was a major simplifying decision in Go's type system.
Regarding operator overloading, it seems more a convenience than an absolute requirement. Again, things are simpler without it.
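The idiomatic alternative is to define and call an ordinary method on a named slice or map type (a minimal sketch, assuming a MyItem type as in the question):

	type MySlice []MyItem

	// Get is a regular method; there is no way to make ms[i] call it instead.
	func (s MySlice) Get(i int) MyItem {
		return s[i]
	}

	// usage:
	// item := ms.Get(0)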

Designing Go packages: when should I define methods on types?

Suppose that I have a type type T int and I want to define logic to operate on this type.
What abstraction should I use, and when?
Defining a method on that type:
func (t T) someLogic() {
	// ...
}
Defining a function:
func somelogic(t T) {
	// ...
}
Some situations where you tend to use methods:
Mutating the receiver: Things that modify fields of the object are often methods. It's less surprising to your users that x.Foo will modify x than that Foo(x) will.
Side effects through the receiver: Things are often methods on a type if they have side effects on/through the object in subtler ways, like writing to a network connection that's part of the struct, or writing via pointers or slices or so on in the struct.
Accessing private fields: In theory, anything within the same package can see unexported fields of an object, but more commonly, just the object's constructor and methods do. Having other things look at unexported fields is sort of like having C++ friends.
Necessary to satisfy an interface: Only methods can be part of interfaces, so you may need to make something a method to just satisfy an interface. For example, Peter Bourgon's Go intro defines type openWeatherMap as an empty struct with a method, rather than a function, just to satisfy the same weatherProvider interface as other implementations that aren't empty structs.
Test stubbing: As a special case of the above, sometimes interfaces help stub out objects for testing, so your stub implementations might have to be methods even if they have no state.
Some situations where you tend to use functions:
Constructors: func NewFoo(...) (*Foo) is a function, not a method. Go has no notion of a constructor, so that's how it has to be.
Running on interfaces or basic types: You can't add methods on interfaces or basic types (unless you use type to make them a new type). So, strings.Split and reflect.DeepEqual must be functions. Also, io.Copy has to be a function because it can't just define a method on Reader or Writer. Note that these don't declare a new type (e.g., strings.MyString) to get around the inability to do methods on basic types.
Moving functionality out of oversized types or packages: Sometimes a single type (think User or Page in some Web apps) accumulates a lot of functionality, and that hurts readability or organization or even causes structural problems (like if it becomes harder to avoid cyclic imports). Making a non-method out of a method that isn't mutating the receiver, accessing unexported fields, etc. might be a refactoring step towards moving its code "up" to a higher layer of the app or "over" to another type/package, or the standalone function is just the most natural long-term place for it. (Hat tip Steve Francia for including an example of this from hugo in a talk about his Go mistakes.)
Convenience "just use the defaults" functions: If your users might want a quick way to use "default" object values without explicitly creating an object, you can expose functions that do that, often with the same name as an object method. For instance, http.ListenAndServe() is a package-level function that makes a trivial http.Server and calls ListenAndServe on it.
Functions for passing behavior around: Sometimes you don't need to define a type and interface just to pass functionality around and a bare function is sufficient, as in http.HandleFunc() or template.Funcs() or for registering go vet checks and so on. Don't force it.
Functions if object-orientation would be forced: Say your main() or init() are cleaner if they call out to some helpers, or you have private functions that don't look at any object fields and never will. Again, don't feel like you have to force OO (à la type Application struct{...}) if, in your situation, you don't gain anything by it.
When in doubt, if something is part of your exported API and there's a natural choice of what type to attach it to, make it a method. However, don't warp your design (pulling concerns into your type or package that could be separate) just so something can be a method. Writers don't WriteJSON; it'd be hard to implement one if they did. Instead you have JSON functionality added to Writers via a function elsewhere, json.NewEncoder(w io.Writer).
If you're still unsure, first write so that the documentation reads clearly, then so that code reads naturally (o.Verb() or o.Attrib()), then go with what feels right without sweating over it too much, because often you can rearrange it later.
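To make that last point concrete, here is the real json.NewEncoder API in use: the JSON logic lives in encoding/json and wraps any io.Writer via a constructor function, rather than being a WriteJSON method on writers themselves.

	package main

	import (
		"encoding/json"
		"os"
	)

	func main() {
		// os.Stdout is an io.Writer; JSON support is layered on top of it by a
		// constructor function in another package, not by a method on the writer.
		enc := json.NewEncoder(os.Stdout)
		enc.Encode(map[string]int{"answer": 42})
	}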
Use the method if you are manipulating the internal secrets of your object:
func (t *T) someLogic() {
	t.mu.Lock()
	...
}
Use the function if you are using the public interface of the object:
func somelogic(t *T) {
	t.DoThis()
	t.DoThat()
}
If you want to change the T object, use
func (t *T) someLogic() {
	// ...
}
If you don't change the T object and would like an object-oriented style, use
func (t T) someLogic() {
	// ...
}
but remember that this will create a temporary copy of the T object when calling someLogic.
If you like the way the C language does it, use
func somelogic(t T) {
	t.DoThis()
	t.DoThat()
}
One more thing: in Go, the type comes after the variable name.
