I know that Go allows multiple init functions in one package, and even in one file.
I am wondering why.
For example, if a package has many files, we could write multiple init functions and then get lost about where each one should go, and we could also be confused about the init order when one package has several of them. (I mean, wouldn't this be better: allow only one init per package, define some initXXX helpers, and call them from that single init? That seems quite clean.)
What's the advantage of multiple init functions from a code-structure point of view?
This question may be somewhat opinion-based, but using multiple package init() functions can make your code easier to read and maintain.
If your source files are large, you usually arrange their content (e.g. types, variable declarations, methods etc.) in some logical order. Allowing multiple init() functions gives you the possibility to put initialization code close to the parts it is meant to initialize. If this were not allowed, you would be forced to use a single init() function per package and put everything in it, far from the variables it needs to initialize.
Yes, having multiple init() functions may require some care regarding the execution order, but know that using multiple init() functions is not a requirement, it's just a possibility. And you can write init() functions so that they have no "side" effects and do not rely on the completion of other init() functions.
If that is unavoidable, you can create one "master" init() which explicitly controls the order of other, "child" initialization functions.
An example of a "master" init() controlling other initialization functions:
func init() {
    initA()
    initB()
}

func initA() {}
func initB() {}
In the above example, initA() will always run before initB().
Relevant section from spec: Package initialization.
Also see related question: What does lexical file name order mean?
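As a small illustration of ordering across files (the demo package and the file names here are invented): assuming the go tool presents the files to the compiler in lexical file name order, which it does by default, the init() in a.go runs before the one in b.go.

// a.go
package demo

import "fmt"

func init() { fmt.Println("a.go: runs first") }

// b.go (a second file in the same package)
package demo

import "fmt"

func init() { fmt.Println("b.go: runs second") }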
Another use case for multiple init() functions is adding functionality based on build tags. The init() function can be used to add hooks into the existing package and extend its functionality.
The following is a condensed example demonstrating the addition of more commands to a CLI utility based on build tags.
package main

import "github.com/spf13/cobra"

var rootCmd = &cobra.Command{Use: "foo", Short: "foo"}

func init() {
    rootCmd.AddCommand(
        &cobra.Command{Use: "CMD1", Short: "Command1"},
        &cobra.Command{Use: "CMD2", Short: "Command2"},
    )
}

func main() {
    rootCmd.Execute()
}
The above is the "vanilla" version of the utility.
// +build debugcommands

package main

import "github.com/spf13/cobra"

func init() {
    rootCmd.AddCommand(&cobra.Command{Use: "DEBUG-CMD1", Short: "Debug command1"})
}
The second file extends the standard command set with additional commands that are mostly relevant during development. Note the blank line after the build constraint; it is required for the constraint to take effect.
Compiling using go build -tags debugcommands will produce a binary with the added commands, while omitting the -tags flag will produce a standard version.
Related
I'm trying to create a template method that should be executed when a certain other method is called.
For example:
func main() {
    onInit()
}

func onInit() {
    var Instance entity.EntityInstance
    Instance.Init()
    // do something
}
Another source file, instance.go
type EntityInstance struct {
    name    string
    version string
}

func (instance *EntityInstance) Init() {
    // do some initialization
}
The main method is in a different code base/app and uses the entity package to invoke certain initializations.
Currently the user writing the above main method needs to explicitly call Instance.Init().
The objective is for the developers (in this case the one who implements the main method) to only concern themselves with their custom initializations and not worry about calling Instance.Init(). The onInit() invocation should take care of Instance.Init() implicitly.
Any help to get me started in the right direction?
EDIT: I do understand that the exact OOP concepts cannot be translated directly to Go, but all I'm looking for is the appropriate approach. Clearly, I need to change the way I think about design here, but I just don't know how.
Your question is a little unclear, I suspect because you are trying to directly translate ideas and idioms from another language; you should resist doing that. However, if you want implicit initialization for a package in Go, you can use the magic function name:
func init(){}
https://golang.org/doc/effective_go.html#init
Finally, each source file can define its own niladic init function to set up whatever state is required. (Actually each file can have multiple init functions.) And finally means finally: init is called after all the variable declarations in the package have evaluated their initializers, and those are evaluated only after all the imported packages have been initialized.
Be careful with this, though: it is implicit behaviour and could cause mysterious bugs if your callers don't know it is happening when they import your package.
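Applied to the question above, a minimal sketch (the entity package and its fields come from the question; the default values and log message are invented) would move the mandatory setup into the package's own init(), so that merely importing the package triggers it and main only has to worry about its custom initialization:

// instance.go, in the entity package
package entity

import "log"

type EntityInstance struct {
    name    string
    version string
}

// Instance is the package-level value that calling code works with.
var Instance EntityInstance

// init runs automatically when any program imports this package,
// so callers never have to remember to call Instance.Init() themselves.
func init() {
    Instance.Init()
    log.Println("entity: implicit initialization done")
}

func (instance *EntityInstance) Init() {
    // do some initialization (the values below are invented defaults)
    instance.name = "default"
    instance.version = "0.0.1"
}

With that in place, the onInit() in the question's main no longer needs the explicit Instance.Init() call.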
Let's say I have some package
// ./somepkg/someFile.go
package somepkg
import "fmt"
func AnExportedFunc(someArg string) {
    fmt.Println("Hello world!")
    fmt.Println(someArg)
}
and I import it from my main go file
// ./main.go
package main

import (
    "./somepkg" // Let's just pretend I have the full path written out
    "fmt"
)

func main() {
    fmt.Println("I want to get a list of exported funcs from package 'somepkg'")
}
Is there a way to get access to the exported functions from package 'somepkg' and then subsequently call them? Argument numbers/types would be consistent across all functions in somepkg. I've looked through the reflect package, but I'm not sure whether I can get the list and call the functions without knowing anything other than the package name. I may be missing something from the godocs, however, so any advice is appreciated.

What I'm trying to do is essentially have a system where people can drop in .go files as a sort of "plugin". These "plugins" will have a single exported function which the main program itself will call with a consistent number and types of args. Access to this codebase is restricted, so there are no security concerns with arbitrary code execution by contributors.
Note: This is all compiled so there are no runtime restrictions
What I'm trying to do is something like this, if written in Python:
# test.py
def abc():
    print "I'm abc"

def cba():
    print "I'm cba"
and
# foo.py
import test
flist = filter(lambda fname: fname[0] != "_", dir(test))
# Let's forget for a moment how eval() is terrible
for fname in flist:
    eval("test." + fname + "()")
Running foo.py prints:
I'm abc
I'm cba
Is this possible in golang?
Edit:
I should note that I have already "accomplished" this with something very similar to http://mikespook.com/2012/07/function-call-by-name-in-golang/, but it requires that each additional "plugin" add its exported function to a package-global map. While this "works", it feels hacky (as if this whole program isn't... lol;) and I would prefer to do it without requiring any additional work from the plugin writers. Basically, I want to make it as "drop and go" as possible.
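For readers who haven't seen it, here is a rough single-file sketch of the registry approach described above (in a real project each init block would live in its own plugin file; the names and the handler signature are invented):

package main

import "fmt"

// handlers is the package-global map each "plugin" registers itself in.
var handlers = map[string]func(arg string){}

// In a real layout this init would sit in its own plugin_hello.go file;
// dropping the file into the package is all a plugin author has to do.
func init() {
    handlers["hello"] = func(arg string) {
        fmt.Println("hello,", arg)
    }
}

func main() {
    // The main program calls every registered plugin with consistent args.
    for name, fn := range handlers {
        fmt.Println("running", name)
        fn("world")
    }
}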
As you might have guessed, this is difficult to achieve in Go, if not impossible, since Go is a compiled language.
Traversing a package for exported functions can only get you the list of function names. A sample for this is: Playground. This is an AST (Abstract Syntax Tree) approach, which means calling the functions dynamically is not possible, or requires too many workarounds; parsing a function-name string into a callable function doesn't work here.
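For reference, the kind of listing such an AST approach produces can be sketched like this (it parses the question's ./somepkg directory and prints the names of exported top-level functions; note it only lists them, it cannot call them):

package main

import (
    "fmt"
    "go/ast"
    "go/parser"
    "go/token"
)

func main() {
    fset := token.NewFileSet()
    // Parse every .go file in the package directory from the question.
    pkgs, err := parser.ParseDir(fset, "./somepkg", nil, 0)
    if err != nil {
        panic(err)
    }
    for _, pkg := range pkgs {
        for _, file := range pkg.Files {
            for _, decl := range file.Decls {
                // Keep only top-level functions (no methods) with exported names.
                if fn, ok := decl.(*ast.FuncDecl); ok && fn.Recv == nil && fn.Name.IsExported() {
                    fmt.Println(fn.Name.Name)
                }
            }
        }
    }
}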
Alternatively, you can try binding the exported functions to some type as methods:
type Task struct{}

func (Task) Process0()                   {}
func (Task) Process1(v int)              {}
func (Task) Process2(v float64)          {}
func (Task) Process3(v1 bool, v2 string) {}
This totally changes the way we operate, as the 4 methods are now associated with a type Task, and we can pass an empty instance of Task around to call the methods defined on it. This might look like just another workaround, but it is very common in languages like Go. A sample for this is in the Playground, which actually works as expected.
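Here is a hedged sketch of calling every method of such a Task value through the reflect package (the method bodies are placeholders, and parameters are filled with zero values just to show the mechanism):

package main

import (
    "fmt"
    "reflect"
)

type Task struct{}

// Placeholder methods; the bodies just show which one was invoked.
func (Task) Process0()                   { fmt.Println("Process0") }
func (Task) Process1(v int)              { fmt.Println("Process1", v) }
func (Task) Process2(v float64)          { fmt.Println("Process2", v) }
func (Task) Process3(v1 bool, v2 string) { fmt.Println("Process3", v1, v2) }

func main() {
    v := reflect.ValueOf(Task{})
    t := v.Type()
    // Walk all exported methods of Task and call each one,
    // passing zero values for every parameter.
    for i := 0; i < t.NumMethod(); i++ {
        m := t.Method(i)
        args := make([]reflect.Value, m.Type.NumIn()-1) // the first "in" is the receiver
        for j := range args {
            args[j] = reflect.Zero(m.Type.In(j + 1))
        }
        fmt.Println("calling", m.Name)
        v.Method(i).Call(args)
    }
}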
In both examples I've used multiple files in the playground. If you are not familiar with this structure, just create your workspace locally as follows and copy the code under each file name from the playground:
<Your project>
├── go.mod
├── main.go
└── task
└── task.go
References:
How to dynamically call all methods of a struct in Golang? [duplicate]
How to inspect function arguments and types [duplicate]
How do I list the public methods of a package in golang [duplicate]
Abstract Syntax Tree - Wiki
Functions vs Methods in Go
The easiest way to do this is to use the template library to parse your code and insert the new package name where appropriate.
Playground
You can use this by loading all of the files where you call the user package and then outputting the generated file to the execution directory.
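One way to read that suggestion, sketched under the assumption that every plugin package exposes a Run() function (the import paths, package names, and output file name here are all invented):

package main

import (
    "os"
    "text/template"
)

// mainTemplate generates an entry point that calls one exported function
// from each "plugin" package.
var mainTemplate = template.Must(template.New("main").Parse(`package main

import (
{{range .}}	"{{.Path}}"
{{end}})

func main() {
{{range .}}	{{.Name}}.Run()
{{end}}}
`))

type plugin struct {
    Path string // import path of the plugin package
    Name string // package name used to qualify the call
}

func main() {
    plugins := []plugin{
        {Path: "example.com/app/plugins/foo", Name: "foo"},
        {Path: "example.com/app/plugins/bar", Name: "bar"},
    }
    // Write the generated entry point; compile it afterwards with `go build`.
    f, err := os.Create("main_gen.go")
    if err != nil {
        panic(err)
    }
    defer f.Close()
    if err := mainTemplate.Execute(f, plugins); err != nil {
        panic(err)
    }
}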
Is there a way to shadow a function at global scope in a Go package? In some Go file, I don't want users to be able to call BFunc... that is, I want to wrap it...
// Lets say this package provides BFunc()
// And I have a naughty user who wants to import it
. "github.com/a/bfunc"
So, in another Go file at global scope, I might do:
func BFunc() { fmt.Print("haha I tricked you") }
When I try this, I get an error that there is a previous declaration of the same function, referring specifically to the . import.
Would there be a syntactical hack I can do to prevent users from globally importing the bfunc.BFunc() method into their code?
UPDATE
This can be described using a simpler snippet.
package main

import . "fmt"

func Print(t string) {
    Print("ASDF")
}

func main() {
    Print("ASDF")
}
Which doesn't work, because Print is redeclared. If there is a way to hack around this so that Print can be redeclared, then that would answer my original question effectively.
If you don't want users of a library to use a function, then don't export that function.
Shadowing identifiers defined in another package is impossible. Shadowing named functions is impossible even in the same package.
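A small sketch of that advice, using the question's hypothetical bfunc package (the Wrapped name and the printed text are invented): keep the implementation unexported and expose only the wrapper you are happy for callers, dot-importers included, to reach.

// Package bfunc is the question's hypothetical package, sketched here.
package bfunc

import "fmt"

// bFunc is unexported, so code outside this package cannot call it,
// not even via a dot import.
func bFunc() {
    fmt.Println("internal behaviour")
}

// Wrapped is the only entry point the package exposes; it decides how
// and when bFunc runs.
func Wrapped() {
    bFunc()
}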
In Go, you can define multiple init functions in a given package, all of which will be run prior to execution. One consequence of having multiple such functions is that it's impossible to call or identify them in normal code. For example, the following will not compile:
func main() {
    fmt.Println(init)
}

func init() {}
(see here for a Go playground example)
My question is: what advantage does being able to have multiple init functions give, and if multiple init functions weren't allowed, would we be able to reference or call init functions?
The advantage of being able to have multiple init functions is IMO mainly that it improves readability through locality: you can write the initialization function next to the stuff being initialized, rather than somewhere remote, as you would have to if all the initialization code were centralized in one function, which, BTW, could then even be in a different source file.
Taking a function pointer to the hypothetical single per-package init function would probably be prohibited as well. The reason is that having such a pointer would allow, in some cases, calling the init function "out of order", i.e. before its dependencies, the other init functions in other packages, have run. That would break certain guarantees.
When I'm writing an interface, it's often convenient to define my tests in the same package as the interface, and then define multiple packages that implement the interface, e.g.
package/
package/impl/x <-- Implementation X
package/impl/y <-- Implementation Y
Is there an easy way to run the same test suite (in this case, located in package/*_test.go) in the sub packages?
The best solution I've come up with so far is to add a test package:
package/tests/
which implements the test suite, plus a test in each of the implementations to run it. But this has two downsides:
1) The tests in package/tests are not in _test.go files, and end up being part of the actual library, documented by godoc, etc.
2) The tests in package/tests are run by a custom test runner, which has to basically duplicate all the functionality of go test to scan for go tests and run them.
Seems like a pretty tacky solution.
Is there is a better way of doing this?
I don't really dislike the idea of using a separate testing library. If you have an interface and generic tests for that interface, other people who implement it might like to use those tests as well.
You could create a package "package/test" that contains the following:
// Tester holds the functions needed for each implementation to test it.
type Tester struct {
    New  func() package.Interface
    Done func(package.Interface)
    // whatever you need; leave nil if a function does not apply
}

func TestInterface(t *testing.T, tester Tester)
Notice that the signature of TestInterface does not match what go test expects. Now, for each package package/impl/x, you add one file generic_test.go:
package x

import (
    "testing"

    "package/test"
)

// run generic tests on this particular implementation
func TestInterface(t *testing.T) {
    test.TestInterface(t, test.Tester{New: New})
}
Here New() is the constructor function of your implementation. The advantages of this scheme are:
Your tests are reusable for whoever implements your interface, even from other packages
It is immediately obvious that you run the generic test suite
The test cases are where the implementation is and not at another, obscure place
The code can be adapted easily if one implementation needs special initialization or similar stuff
It's go test compatible (big plus!)
Of course, in some cases you need a more complicated TestInterface function, but this is the basic idea.
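For completeness, here is a self-contained sketch of what the body of such a TestInterface might look like. To keep it in one file the interface is declared inside the test package, and its Put/Get methods are invented placeholders; in the layout above the interface would of course live in the parent package:

package test

import "testing"

// Interface stands in for the interface under test.
type Interface interface {
    Put(key, value string)
    Get(key string) (string, bool)
}

// Tester bundles the hooks each implementation provides.
type Tester struct {
    New  func() Interface // required: constructs a fresh value
    Done func(Interface)  // optional cleanup; may be nil
}

// TestInterface runs the generic contract checks against one implementation.
func TestInterface(t *testing.T, tester Tester) {
    v := tester.New()
    if tester.Done != nil {
        defer tester.Done(v)
    }
    v.Put("k", "v")
    if got, ok := v.Get("k"); !ok || got != "v" {
        t.Errorf(`Get("k") = %q, %v; want "v", true`, got, ok)
    }
}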
If you share a piece of code for reuse by different packages then yes, it is a library by definition, even when it is used only for testing from *_test.go files. It's no different from importing "testing" or "fmt" in a _test.go file. And having the API documented by godoc is a plus, not a minus, IMHO.
Maybe something gets mixed up here a bit: if package a only defines an interface, then there is no code to test, as interfaces in Go are implementation-free. So I assume the methods of your interface in package a have constraints. E.g. in
type Walker interface {
    Walk(step int)
    Tired() bool
}
your contract assumes that Tired returns true if more than 500 steps have been Walk'ed (and false otherwise), and your test code checks these dependencies (or assumptions, contracts, invariants, whatever you name them).
If this is the case, I would provide (in package a) an exported function:
func TestWalkerContract(w Walker) error {
    w.Walk(100)
    if w.Tired() {
        return errors.New("Tired after 100 steps")
    }
    w.Walk(450)
    if !w.Tired() {
        return errors.New("Not tired after 100+450 steps")
    }
    return nil
}
This documents the contract properly and can be used by packages b and c, whose types implement Walker, to test their implementations in b_test.go and c_test.go. IMHO it is perfectly okay that functions like TestWalkerContract are displayed by godoc.
P.S. More common than Walk and Tired might be an error state which is kept and reported until cleared/reset.