Calling an exported method on an unexported field - go

I know, similar questions have been asked, but I found no answer for that case:
type ExportedStruct struct { // comes from a dependency, so I can't change it
    unexportedResource ExportedType
}
I want to call an exported method Close() on unexportedResource.
What I did was:
rs := reflect.ValueOf(myExportedStructPtr).Elem() //myExportedStructPtr is a pointer to an ExportedStruct object
resourceField := rs.FieldByName("unexportedResource")
closeMethod := resourceField.MethodByName("Close")
closeMethod.Call([]reflect.Value{reflect.ValueOf(context.Background())})
This results in: reflect.flag.mustBeExported using value obtained using unexported field.
This is quite annoying, since I want to run more than one test that uses ExportedStruct, but I can't as long as the underlying resource is not closed.
Since I can access private fields (as explained here), I have some hope that I'm allowed to access the public method of that field somehow, too. Maybe I'm just reflecting wrong?

Unexported fields are for the declaring package only. Stop messing with them. They are not for you.
The linked answer can only access it by using package unsafe, which is not for everyday use. Package unsafe should come with a "not to touch" manual.
If you do need to access unexportedResource, make it exported. Either the field, or add a method to the type that calls unexportedResource.Close(). Or add a utility function to the package that does this (functions in the same package can access unexported fields and identifiers).
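If you do control the declaring package, such a utility helper could look roughly like this. This is only a sketch: it assumes, as in the question's call, that Close takes a context, and the error return is an assumption as well.

// In the package that declares ExportedStruct:
func CloseResource(ctx context.Context, s *ExportedStruct) error {
    return s.unexportedResource.Close(ctx)
}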

While icza's answer gives you the reason why you should not do it, here is how to do it using reflect and unsafe:
var t pkg.T
v := reflect.ValueOf(&t).Elem()
f := v.FieldByName("t")
rf := reflect.NewAt(f.Type(), unsafe.Pointer(f.UnsafeAddr())).Elem()
rf.MethodByName("Print").Call(nil)
playground: https://play.golang.org/p/CmG9e4Bl9gg
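Applied to the question's types, a rough sketch could look like this. It assumes the reflect, unsafe and context imports, and that Close is in the method set of ExportedType or *ExportedType; treat it as test-only code.

// closeUnexportedResource is a test-only helper for the question's types.
func closeUnexportedResource(s *ExportedStruct) {
    f := reflect.ValueOf(s).Elem().FieldByName("unexportedResource")
    // NewAt re-creates a *ExportedType at the same address; the resulting
    // Value is not flagged as obtained from an unexported field, so its
    // exported methods may be called.
    fp := reflect.NewAt(f.Type(), unsafe.Pointer(f.UnsafeAddr()))
    fp.MethodByName("Close").Call([]reflect.Value{reflect.ValueOf(context.Background())})
}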

I am afraid that what you are trying to do is impossible through reflection.
Below is the implementation of reflect.Value.Call:
func (v Value) Call(in []Value) []Value {
    v.mustBe(Func)
    v.mustBeExported()
    return v.call("Call", in)
}
As you can see, there is an explicit check (mustBeExported()) that the Value was not obtained through an unexported field.
Typically there is a reason why fields are not exported. If you want to manipulate that field you will have to use methods implemented by the ExportedStruct struct.
If you can modify the code where ExportedStruct is defined, you can easily implement a wrapper Close method on that. For example:
type ExportedStruct struct {
    unexportedResource ExportedType
}

func (e ExportedStruct) Close() {
    e.unexportedResource.Close()
}

Related

How to write idiomatic constructor

I'm confused about the constructors in Go. Most constructors I've seen return a struct, but 'Effective Go' suggests that an interface can be returned in some cases, according to the rule of 'Generality'.
I trust 'Effective Go' to provide good ideas, but this doesn't seem to follow the principle of 'accept interfaces, return structs'. I guess that many types implement an interface and nothing more than that, so in that case it would be common to see constructors which return interfaces.
Another related statement is that interfaces should be defined by the consumer, but 'Generality' means that the interface is defined by the producer.
Can someone clarify?
As it has already been mentioned, returning an interface should be considered something exceptional.
Returning errors of type error, which is an interface, is one of those exceptions.
Returning an interface that represents an unexported type is the other exception. But why would you have an exported interface that describes an unexported struct instead of just having an exported struct?
The reason is simple: it allows you a higher degree of control over how that struct is constructed.
Compare these two pieces of code:
// First: an exported struct; the constructor returns the concrete type.
type MyType struct {
    MyField string
}

func NewMyType(value string) MyType {
    return MyType{value}
}

func (t MyType) MyMethod() string {
    return t.MyField
}

// Second: an exported interface over an unexported struct; the constructor
// returns the interface.
type MyType interface {
    MyMethod() string
}

type myType struct {
    MyField string
}

func NewMyType(value string) MyType {
    return myType{value}
}

func (t myType) MyMethod() string {
    return t.MyField
}
In the first case I would be able to write myVar := MyType{}, while in the second case I can't; I am forced to use the provided constructor. The first case also allows modifying the field value after creation, which is not allowed in the second case. Making the field unexported would solve the second part, but not the first.
This example is obviously trivial, but being able to construct invalid structs may have a horrible impact. By having specific constructors you can ensure that the object is in a valid starting state and you will only need to make sure it always stays in a valid state. If you can't ensure that, you may need to check that it is in a valid state at the start of every method.
For example, consider a DB request. It needs a DB connection. If the user is able to create a DB request without a DB connection, you will have to check that it is valid in every method. If you force the caller to use a constructor, you can check at creation time and be done with it.
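A minimal sketch of that idea (the DB types here are illustrative, not from any particular package):

package db

import (
    "database/sql"
    "errors"
)

// Request can only be obtained through NewRequest, so its methods can
// rely on conn being non-nil.
type Request struct {
    conn *sql.DB
}

func NewRequest(conn *sql.DB) (*Request, error) {
    if conn == nil {
        return nil, errors.New("db: nil connection")
    }
    return &Request{conn: conn}, nil
}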
It depends a bit on your preference and how you view things. Coming from an OOP background, my take is: there is no point in a constructor if you cannot enforce it. Adding a constructor means: you must supply these values when instantiating this type. If your struct is public, it will be misused and instantiated bypassing the constructor. So it makes sense that the constructor returns the public interface and the struct stays private (lowercase). If the struct is public, there is little point in the constructor, because you cannot enforce its use. Writing code is a dialogue between writer and reader; making a struct public while also providing a constructor tells the reader: here is a constructor, but there is also a public struct, so using the constructor is optional. If that is what you intend, go with that setup.
In most cases constructor functions return concrete types (or a pointer to a concrete type).
The situation in which returning an interface might be a good idea is in factory or builder functions, where the underlying concrete type satisfies that interface.
Consider the error interface, for example: when you call http.NewRequest, the underlying concrete error type can be a net.Error, net.DNSError, etc. Now try to think how you would create an API like this without the error interface if the function returned a concrete type. The only solution I can think of is to create a massive error type for the net package and add fields for all the extra information, but that is most probably much harder to maintain and test, not to mention the memory bloat.
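As a rough sketch of why the interface return works out in practice (the exact wrapped error types depend on platform and failure mode), callers can still recover the concrete type when they need the detail:

package main

import (
    "errors"
    "fmt"
    "net"
    "net/http"
)

func main() {
    // http.Get returns the error interface; the concrete error underneath
    // can still be recovered with errors.As when the caller needs detail.
    _, err := http.Get("http://no-such-host.invalid/")
    if err != nil {
        var dnsErr *net.DNSError
        if errors.As(err, &dnsErr) {
            fmt.Println("DNS lookup failed for:", dnsErr.Name)
        } else {
            fmt.Println("request failed:", err)
        }
    }
}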
Whether you choose to return a concrete type or an interface is a design choice; some guidelines exist to deal with common scenarios.

Could anybody provide your own or a standard convention for a "clone method" that works well, for reference?

I can't find a "clone" method convention in Go, but it seems necessary to have one.
I have only seen the built-in way, *clonedObj = *obj, but it is too low-level and can't handle a (when necessary) deep copy of cases like struct { member *CompositionObj }.
I doubt whether the prototype "func (obj ClassA) Clone() interface{}" will work, because calling obj2 := obj.Clone() will "lose" the method set of ClassA and needs explicit code like obj2.(*ClassA) afterwards.
Please advise a working direction.
This answer to a similar question regarding maps suggests using the gob package. The documentation states:
A stream of gobs is self-describing. Each data item in the stream is preceded by a specification of its type, expressed in terms of a small set of predefined types. Pointers are not transmitted, but the things they point to are transmitted; that is, the values are flattened. Nil pointers are not permitted, as they have no value. Recursive types work fine, but recursive values (data with cycles) are problematic. This may change.
so it may not be suitable for your use case.
That said, your question largely depends on your actual use case. You usually do not need a generic way to deep-copy things; you can usually either get away with the built-in copy mechanics or write concrete copy functions for the types that actually need it.
An alternative might be the deepcopy package, but I have no experience with it myself; I just found it on Google.
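For completeness, a gob-based deep copy could be sketched roughly like this; it only works for exported, gob-encodable fields and non-nil pointers:

package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
)

type Inner struct{ N int }

type Outer struct {
    Name  string
    Inner *Inner
}

// deepCopy clones src into dst by round-tripping through gob.
func deepCopy(dst, src *Outer) error {
    var buf bytes.Buffer
    if err := gob.NewEncoder(&buf).Encode(src); err != nil {
        return err
    }
    return gob.NewDecoder(&buf).Decode(dst)
}

func main() {
    a := Outer{Name: "a", Inner: &Inner{N: 1}}
    var b Outer
    if err := deepCopy(&b, &a); err != nil {
        panic(err)
    }
    b.Inner.N = 2
    fmt.Println(a.Inner.N, b.Inner.N) // 1 2: the pointed-to value was copied
}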
OK, since for a while no one else gave me a proper reference, I have found some reference examples of how to clone in Go myself and want to share them.
(Only upvote if this answer is useful to you; I'm not after votes. Other, better answers and comments are welcome.)
I found this prototype in package "github.com/jinzhu/gorm" (a database ORM library) for reference:
func (s *DB) clone() *DB {
    db := &DB{
        ...
    }
    ...
    return db
}
And a similar pattern in package "golang.org/x/net/html/atom":

func (n *Node) clone() *Node {
    m := &Node{
        Type: n.Type,
        ...
    }
    ...
    return m
}
The above prototype is enough if Clone()'s caller always knows your object type when cloning. (And you need an uppercase Clone() to make the method exported.)
However, if you want the more advanced feature where a variable may hold any object implementing a common base interface, here is my sample:
func (t *T) Clone() YourBaseInterface
Where YourBaseInterface is:
type YourBaseInterface interface {
    Clone() YourBaseInterface
    OtherMethod1()
    ...
}
Or you can merely use interface{} instead of YourBaseInterface as the return type and do a type assertion like obj2 := obj.Clone().(*YourBaseType) after cloning.
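A minimal sketch of that interface-based pattern, reusing the answer's placeholder names (the fields here are only illustrative):

type YourBaseInterface interface {
    Clone() YourBaseInterface
}

type YourBaseType struct {
    Value    int
    Children []*YourBaseType
}

// Clone returns a deep copy, exposed through the base interface.
func (t *YourBaseType) Clone() YourBaseInterface {
    c := &YourBaseType{Value: t.Value, Children: make([]*YourBaseType, len(t.Children))}
    for i, child := range t.Children {
        c.Children[i] = child.Clone().(*YourBaseType) // assert back to the concrete type
    }
    return c
}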
CAUTION
There is one drawback with this prototype. Because Go doesn't support it as a built-in, the Clone() method won't be called by some language features, e.g. when you copy(dst, src) a []YourTypeWithClone slice. Instead, it still does plain *elem2 = *elem1 struct copying. The solutions are either to avoid those built-ins, or to fall back to designing the struct's members so that a plain copy is enough for its copying purpose, if possible.

Best practices for using your own types in Go. Aliases for built-in types

I wonder when, or if, I should use my own types in Go. When does this make my code easier to understand, and when should I not use my own types?
Example:
I want to create a map type from MAC address to host name.
The first, simplest way I can do that:

machines := map[string]string{
    "11:22...": "myHost",
    "22:33..":  "yourHost",
}
The second way:

type MAC string
type HOST string

machines := map[MAC]HOST{
    MAC("11:22.."): HOST("myHost"),
    MAC("22:33.."): HOST("yourHost"),
}
In the above example I can get additional control over my types MAC and HOST by writing methods for validation, comparison, etc. Is that better?
The third way:

type MACHINES map[string]string

m := MACHINES{}
m = map[string]string{
    "11:22..": "myHost",
    "22:33":   "yourHost",
}
The above example is, for me, the worst to understand and the least intuitive to someone else. I also think it should be filled in with HOST and MAC, because type MACHINES tells a developer nothing about how it should be used, so I would prefer:
type MACHINES map[MAC]HOST
However, please comment so I can better understand the usage of my own types in Go.
Without commenting on your specific example, there are a few reasons you'd generally want to use a new type:
You need to define methods on the type (see the sketch after this list)
You don't want the type to be comparable with literals or variables with the type it's derived from (eg. to reduce user confusion or make sure they don't do something invalid like attempt to compare your special string with some other random string)
You just need a place to put documentation, or to group methods that return a specific type (eg. if you have several Dial methods that return a net.Conn, you might create a type Conn net.Conn and return that instead just for the sake of grouping the functions under the Conn type header in godoc or to provide general documentation for the net.Conn returned by the methods).
Because you want people to be able to check if something of a generic type came from your package or not (eg. even if your Conn is just a normal net.Conn, it gives you the option of type switching and checking if it's a yourpackage.Conn as well)
You want a function to take an argument from a predefined list of things and you don't want the user to be able to make new values that can be passed in (eg. a list of exported constants of an unexported type)
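For the first point, a minimal sketch built on the question's MAC/HOST example; net.ParseMAC does the actual validation:

package main

import (
    "fmt"
    "net"
)

// MAC is a distinct string type, so behaviour can be attached to it.
type MAC string

// Valid reports whether the value parses as a hardware address.
func (m MAC) Valid() bool {
    _, err := net.ParseMAC(string(m))
    return err == nil
}

type HOST string

func main() {
    machines := map[MAC]HOST{
        "00:11:22:33:44:55": "myHost",
        "not-a-mac":         "yourHost",
    }
    for m, h := range machines {
        fmt.Println(m, h, m.Valid())
    }
}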
Creating a distinct named type is only useful when you need to add extra methods (such as validation functions) or when you want to document the desired use of some value (for example, the net.IP type).
A named type can help you prevent API misunderstanding, but it won't if you're using constant values. For example, this code is valid:
type Host string
type Mac string
hosts := map[Mac]Host{"ad:cb..": "localhost"}
For further information about how constants work in Go, you can check Rob Pike's blog post.
One of the most important features of Go is interfaces. In Go, by defining the method(s) of an interface you implement that interface, and the only way to implement an interface is by adding methods to your type. For instance, say you want to implement the Stringer interface from the fmt package.
In your example type MACHINES map[string]string you would add a method called String to your type.
func (m MACHINES) String() string {
    s := "MACHINES\n"
    for k, v := range m {
        s += fmt.Sprintf("%v: %v\n", k, v)
    }
    return s
}
Any other function that accepts the Stringer interface can now also accept your MACHINES type, since you implemented the Stringer interface.
For example, fmt.Printf checks whether the passed-in value implements the Stringer interface:
m := MACHINES{"foo":"bar", "baz": "bust"}
fmt.Printf("%s", m)
This will invoke the MACHINES String method.
Example from the playground
You do want to use your own types in my opinion.
As an example, lists of function arguments that are all "string" or all "int" are confusing and very easy to get wrong.
And a comment on your type names. "MAC" is an acronym for Media Access Control so it should stay as all caps. But "HOST" should be "Host" since it is just a word. I don't recall where it is exactly, but there's a recommended form for Go names which is CamelCase with all-capital abbreviations, like "GetIPv4AddressFromDNS"
I think another reason to use your own types, one that hasn't been mentioned here, is if you are not sure exactly what the right underlying type for something is (e.g. uint8 vs. uint16) and you want to be able to change it later in one place.
This will however make conversions necessary whenever you want to use the built-in types' methods. Or you will need to define them on your own type.
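A tiny sketch of that idea (the type name here is made up):

// Offset is a named type, so the representation choice lives in one place;
// switching it to uint32 later only touches this declaration.
type Offset uint16

func add(a, b Offset) Offset { return a + b }

// Mixing with built-in integer variables still needs explicit conversions:
//   var x uint16 = 7
//   _ = add(Offset(x), 3)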

How can Go type ExitError in package os/exec support the Sys() method if it's not in the documentation?

Based on various examples on the web and in the answer to this SO question, I am trying to figure out how it is possible for type ExitError from package os/exec to support the Sys() method even if the docs only mention the Error() method for this type.
I've guessed that the Sys() method in question is from type ProcessState in package os, but how does ExitError get to use it directly (exiterror.Sys()) without having to use the full (exiterror.ProcessState.Sys())?
This must be a basic Go question, but I've yet to figure out the answer to this one on my own...
cmd.Wait() already returns an error of type *ExitError. If you look at ExitError's definition, you can see that it embeds *os.ProcessState:
type ExitError struct {
    *os.ProcessState
    // other fields
}
It is through *os.ProcessState that a value of type ExitError can call the Sys() method.
Note that within the definition of ExitError there is no field name associated with *os.ProcessState; the field is embedded. This means a value of type ExitError can directly call any method of *os.ProcessState, as long as no method with the same name is defined on ExitError itself (sort of like inheritance, where ExitError inherits from *os.ProcessState, but this is only to give you a very basic idea; read the docs for clarification).
There is of course more to it. You can read about it here.
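To see the promoted method in action, here is a minimal sketch; the syscall.WaitStatus assertion assumes a Unix-like system:

package main

import (
    "fmt"
    "os/exec"
    "syscall"
)

func main() {
    err := exec.Command("sh", "-c", "exit 3").Run()
    if exitErr, ok := err.(*exec.ExitError); ok {
        // Sys is promoted from the embedded *os.ProcessState.
        if status, ok := exitErr.Sys().(syscall.WaitStatus); ok {
            fmt.Println("exit status:", status.ExitStatus())
        }
    }
}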

Why do packages export a non-exported function by returning the non-exported function through an exported function

Why do some packages declare two nearly identical functions, where the only difference is that one is exported and the other is not, and the exported one just calls the non-exported one, like this:
func Foo() {
    foo()
}

func foo() {
    log.Println("Hello")
}
Why not just move the log into the exported function and get rid of the extra line? Obviously there is a reason but I don't really see one if you can just use the exported one everywhere. Thanks!
Example here of it being used in Production
You mentioned a couple examples. The first example (https://github.com/yohcop/openid-go/blob/master/verify.go#L11-L13):
func Verify(uri string, cache DiscoveryCache, nonceStore NonceStore) (id string, err error) {
    return verify(uri, cache, urlGetter, nonceStore)
}
You can see that the unexported verify function takes an extra urlGetter argument. This may be something that a client of this package cannot or should not provide. The exported function determines how clients of the package can/should use it; the signature of the non-exported function reflects the dependencies required to do whatever business logic verify is doing.
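The general shape of that pattern, sketched with made-up names (not the package's actual API): the exported function wires in the real dependency, while the unexported one takes it as a parameter, which also makes it easy to exercise with a fake in tests:

package fetch

import (
    "io"
    "net/http"
)

// getter is the dependency the exported API hides from callers.
type getter func(url string) (*http.Response, error)

// Fetch is the public entry point; it supplies the real HTTP client.
func Fetch(url string) ([]byte, error) {
    return fetch(url, http.Get)
}

// fetch contains the actual logic and can be tested with a fake getter.
func fetch(url string, get getter) ([]byte, error) {
    resp, err := get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return io.ReadAll(resp.Body)
}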
The second example (https://github.com/golang/oauth2/blob/master/oauth2.go#L259-L266):

func StaticTokenSource(t *Token) TokenSource {
    return staticTokenSource{t}
}

// staticTokenSource is a TokenSource that always returns the same Token.
type staticTokenSource struct {
    t *Token
}
This restricts how clients can construct the staticTokenSource: there is only one way to do it, via the StaticTokenSource constructor, and it cannot be done directly via a struct literal. This can be useful for many reasons, e.g. input validation. In this case, you want the safety of knowing that the client cannot mutate the t field on the object, and in order to do this, you leave the t field unexported. But when it's unexported, the client will not be able to construct the struct literal directly, so you must provide a constructor.
In general, it makes your code much easier to reason about when you can restrict how things are accessed, constructed, or mutated. Golang packages give you a nice mechanism to encapsulate modules of business logic. It's a good idea to think about the conceptual components of your software, and what their interfaces should be. What really needs to be exposed to client code consuming a given component? Only things that really need to be exported should be.
Further reading: Organizing Go code
