Passing Variables to a Golang Package

Given a package containing the following package-level variable and functions:
var bucket *gocb.Bucket

func Init(b *gocb.Bucket) {
    bucket = b
}

func DoSomething() {
    // do something with 'bucket'
}
Is it acceptable to call the Init function, passing in an instance of bucket, before calling DoSomething, which is dependent on the bucket variable?
Or, should DoSomething instead explicitly accept a bucket parameter, as follows:
func DoSomething(bucket *gocb.Bucket) {
    // do something with 'bucket'
}
I would prefer to instantiate a single instance of bucket at the package level and use it throughout the application life-cycle, as opposed to managing it at the function level. Is this acceptable from a design, performance, etc. perspective, or is there a preferred means of achieving this? Bear in mind that bucket need only be instantiated once.
DoSomething will be called from an HTTP context; I would prefer that the HTTP handlers not have visibility of the bucket parameter, and instead instantiate bucket on application start-up.

In Go, if your package depends on something external you import said thing. So, unless it's impossible for some reason, you should import the package that instantiates bucket and take it from there, either directly assigning it or in your package's init function.
import "my/other/pkg"
var bucket = pkg.InitBucket()
However, if it's impossible to determine what package will provision bucket, then your way is the way to Go. As an example, consider database/sql where SQL drivers have to be registered before they can be used.
In general, IoC does not apply to Go packages if that's what you had in mind.
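For the case where no other package can own the bucket, a minimal sketch of the wiring the asker prefers might look like the following; the myapp/mypkg import path, the route, and the connection details (gocb v1 style) are assumptions for illustration, not part of the original code:
package main

import (
    "net/http"

    "gopkg.in/couchbase/gocb.v1" // assumed gocb v1 import path
    "myapp/mypkg"                // hypothetical package holding Init/DoSomething
)

func main() {
    // Instantiate the bucket exactly once, at application start-up.
    cluster, err := gocb.Connect("couchbase://localhost")
    if err != nil {
        panic(err)
    }
    bucket, err := cluster.OpenBucket("default", "")
    if err != nil {
        panic(err)
    }

    mypkg.Init(bucket)

    // HTTP handlers never see the bucket; they only call into the package.
    http.HandleFunc("/do", func(w http.ResponseWriter, r *http.Request) {
        mypkg.DoSomething()
    })
    http.ListenAndServe(":8080", nil)
}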

Related

Handling package and function names dynamically

I'm on my first Golang project, which consists of a small router for an MVC structure. Basically, what I hope it does is take the request URL, separate it into chunks, and forward the execution flow to the package and function specified in those chunks, providing some kind of fallback when there is no match for these values in the application.
An alternative would be to map all packages and functions into variables, then look for a match in the contents of those variables, but this is not a dynamic solution.
The alternative I have used in other languages (PHP and JS) is to reference those names syntactically, where the language somehow uses the value of the variable instead of its literal name. Something like: {packageName}.{functionName}(). The problem is that I still haven't found any syntactic way to do this in Golang.
Any suggestions?
func ParseUrl(request *http.Request) {
    // out of this URL: http://www.example.com/controller/method/key=2&anotherKey=3
    var requestedFullURI = request.URL.RequestURI() // returns '/controller/method?key=2&anotherKey=3'
    controlFlowString, _ := url.Parse(requestedFullURI)
    substrings := strings.Split(controlFlowString.Path, "/") // returns ["", "controller", "method"]
    if len(substrings[1]) > 0 {
        // Here we'll check if substrings[1] matches an existing package (controller) name
        // and provide some error return in case it does not
        if len(substrings[2]) > 0 {
            // check if substrings[2] matches an existing function name (method) inside
            // the requested package and run it, passing on the control flow
        } else {
            // there's no requested method, we'll just run some fallback
        }
    } else {
        err := errors.New("You have not determined a valid controller.")
        fmt.Println(err)
    }
}
You can still solve this in a half-dynamic manner. Define your handlers as methods on an empty struct and register just that struct. This will greatly reduce the number of registrations you have to do, and your code will be more explicit and readable. For example:
handler.register(MyStruct{}) // the implementation is for another question
The following code shows all that's needed to make all methods of MyStruct accessible by name. Now, with some effort and the help of the reflect package, you can support routing like MyStruct/SomeMethod. You can even define a struct with some fields which can serve as branches, so that even MyStruct/NestedStruct/SomeMethod is possible.
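For completeness, here is a minimal sketch of the reflect-based lookup this answer has in mind; callByName and the MyStruct method are illustrative names, not a library API:
package main

import (
    "fmt"
    "reflect"
)

type MyStruct struct{}

func (MyStruct) SomeMethod() {
    fmt.Println("SomeMethod called")
}

// callByName looks up a method on the registered struct by its exported
// name and invokes it if it exists.
func callByName(target interface{}, methodName string) error {
    m := reflect.ValueOf(target).MethodByName(methodName)
    if !m.IsValid() {
        return fmt.Errorf("no method %q", methodName)
    }
    m.Call(nil) // only works for methods that take no arguments
    return nil
}

func main() {
    _ = callByName(MyStruct{}, "SomeMethod") // prints "SomeMethod called"
}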
Don't do this, please.
Your idea may sound like a good one, but believe me, it's not. It's a lot better to use a framework like go-chi, which is more flexible and readable, than to do some reflect madness that no one will understand. Not to mention that traversing type trees in Go was never a fast task. Your routes should not be defined by the names of structures in your backend. When you commit to this, you will end up with strangely named routes that use PascalCase instead of something-like-this.
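For contrast, a minimal sketch of the explicit-routing alternative, assuming the go-chi v5 module path; the route and handler body are made up:
package main

import (
    "net/http"

    "github.com/go-chi/chi/v5"
)

func main() {
    r := chi.NewRouter()

    // Routes are declared explicitly, with readable kebab-case paths,
    // instead of being derived from struct or method names.
    r.Get("/users/{userID}", func(w http.ResponseWriter, req *http.Request) {
        id := chi.URLParam(req, "userID")
        w.Write([]byte("user " + id))
    })

    http.ListenAndServe(":8080", r)
}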
What you're describing is very typical of PHP and JavaScript, and completely inappropriate to Go. PHP and JavaScript are dynamic, interpreted languages. Go is a static, compiled language. Rather than trying to apply idioms which do not fit, I'd recommend looking for ways to achieve the same goals using implementations more suitable to the language at hand.
In this case, I think the closest you get to what you're describing while still maintaining reasonable code would be to use a handler registry as you described, but register to it automatically in package init() functions. Each init function will be called once, at startup, giving the package an opportunity to initialize variables and register things like handlers and drivers. When you see things like database driver packages that need to be imported even though they're not referenced, init functions are why: importing the package gives it the chance to register the driver. The expvar package even does this to register an HTTP handler.
You can do the same thing with your handlers, giving each package an init function that registers the handler(s) for that package along with their routes. While this isn't "dynamic", being dynamic has zero value here - the code can't change after it's compiled, which means that all you get from being dynamic is slower execution. If the "dynamic" routes change, you'd have to recompile and restart anyway.
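A minimal sketch of that registry-plus-init approach; the registry and users package names and the myapp module path are made up for illustration:
// registry/registry.go -- a tiny hypothetical registry package
package registry

import "net/http"

var mux = http.NewServeMux()

// Register is called by controller packages from their init functions.
func Register(pattern string, h http.HandlerFunc) {
    mux.HandleFunc(pattern, h)
}

// Mux hands the assembled router to main for http.ListenAndServe.
func Mux() *http.ServeMux { return mux }

// users/users.go -- a hypothetical controller package
package users

import (
    "net/http"

    "myapp/registry"
)

// init runs once at start-up; importing the package (even with a blank
// import: _ "myapp/users") is enough to register its routes.
func init() {
    registry.Register("/users", list)
}

func list(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("user list"))
}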

Inheritance in golang

I'm trying to create a template method that should be executed when a certain other method is called.
For example:
func main() {
    onInit()
}

func onInit() {
    var Instance entity.EntityInstance
    Instance.Init()
    // do something
}
Another source file, instance.go
type EntityInstance struct {
    name    string
    version string
}

func (instance *EntityInstance) Init() {
    // do some initialization
}
The main method is in a different code base/app and uses the entity package to invoke certain initializations.
Currently, the user writing the above main method needs to explicitly call Instance.Init().
The objective is for the developers (in this case, the one implementing the main method) to only concern themselves with their own custom initializations and not worry about calling Instance.Init(); the onInit() invocation should take care of Instance.Init() implicitly.
Any help to get me started in the right direction ?
EDIT: I do understand that the exact OOP concepts cannot be translated directly to Golang, but all I'm looking for is the appropriate approach. Clearly, I need to change the way I think about design here, but I just don't know how.
Your question is a little unclear, I suspect because you are trying to directly translate ideas and idioms from another language; you should resist doing that. However, if you want an implicit init for a package in Go, you can use the magic function name:
func init(){}
https://golang.org/doc/effective_go.html#init
Finally, each source file can define its own niladic init function to set up whatever state is required. (Actually each file can have multiple init functions.) And finally means finally: init is called after all the variable declarations in the package have evaluated their initializers, and those are evaluated only after all the imported packages have been initialized.
Be careful with this though, it is implicit behaviour and could cause mysterious bugs if your callers don't know it is happening when they import your package.
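Applied to the example above, one hedged sketch is to let the entity package own the instance and initialize it in its own init function, so that importing the package is enough; the field values here are placeholders:
package entity

type EntityInstance struct {
    name    string
    version string
}

func (instance *EntityInstance) Init() {
    // do some initialization (placeholder values)
    instance.name = "default"
    instance.version = "0.0.1"
}

// Instance is the package-level instance consumers use.
var Instance EntityInstance

// init runs automatically when the package is imported, so callers
// never have to remember to call Instance.Init() themselves.
func init() {
    Instance.Init()
}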

Why do packages export a non-exported function by returning the non-exported function through an exported function

Why do some packages declare two identical functions where the only difference is that one is exported and the other is not, and the exported one just returns the non-exported function, like this:
func Foo() {
    return foo()
}

func foo() {
    log.Println("Hello")
}
Why not just move the log into the exported function and get rid of the extra line? Obviously there is a reason but I don't really see one if you can just use the exported one everywhere. Thanks!
Here is an example of it being used in production.
You mentioned a couple examples. The first example (https://github.com/yohcop/openid-go/blob/master/verify.go#L11-L13):
func Verify(uri string, cache DiscoveryCache, nonceStore NonceStore) (id string, err error) {
    return verify(uri, cache, urlGetter, nonceStore)
}
You can see that the unexported verify function takes an extra urlGetter argument. This may be something that a client of this package cannot or should not provide. The exported function determines how clients of the package can/should use it; the signature of the non-exported function reflects the dependencies required to do whatever business logic verify is doing.
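A stripped-down sketch of that shape, with made-up names rather than the actual openid-go internals:
package verifier

import "net/http"

// httpGetter is an internal dependency; tests inside the package can
// swap it out, but clients of the package never see it.
var httpGetter = http.Get

// Verify is the exported entry point with the client-facing signature.
func Verify(uri string) error {
    return verify(uri, httpGetter)
}

// verify carries the full dependency list, which keeps it easy to test.
func verify(uri string, get func(string) (*http.Response, error)) error {
    resp, err := get(uri)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    return nil
}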
The second example (https://github.com/golang/oauth2/blob/master/oauth2.go#L259-L266):
func StaticTokenSource(t *Token) TokenSource {
    return staticTokenSource{t}
}

// staticTokenSource is a TokenSource that always returns the same Token.
type staticTokenSource struct {
    t *Token
}
This restricts how clients can construct the staticTokenSource: there is only one way to do it, via the StaticTokenSource constructor, and it cannot be done directly via a struct literal. This can be useful for many reasons, e.g. input validation. In this case, you want the safety of knowing that the client cannot mutate the t field on the object, and in order to do this, you leave the t field unexported. But when it's unexported, the client will not be able to construct the struct literal directly, so you must provide a constructor.
In general, it makes your code much easier to reason about when you can restrict how things are accessed, constructed, or mutated. Golang packages give you a nice mechanism to encapsulate modules of business logic. It's a good idea to think about the conceptual components of your software, and what their interfaces should be. What really needs to be exposed to client code consuming a given component? Only things that really need to be exported should be.
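To illustrate the validation point, here is a small hypothetical constructor-only type; it is not from either linked package:
package token

import (
    "errors"
    "strings"
)

// StaticToken's field is unexported, so a struct literal from outside
// the package cannot bypass validation or mutate the value later.
type StaticToken struct {
    value string
}

// NewStaticToken is the only way for clients to obtain a StaticToken.
func NewStaticToken(value string) (StaticToken, error) {
    if strings.TrimSpace(value) == "" {
        return StaticToken{}, errors.New("token value must not be empty")
    }
    return StaticToken{value: value}, nil
}

// Value exposes the token read-only.
func (t StaticToken) Value() string { return t.value }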
Further reading: Organizing Go code

How to define a struct globally and reuse it across packages

I'm very new to Go and have this "design" problem.
I have a main program passing jobs through channels. Each job will end up in a function defined in a separate "worker" package. Jobs are structs.
Now I want each function called to return its result as a common struct through a "result" channel. But the package doesn't know about the struct definition I have in main, and so I cannot refer to it.
package main

type resultEvent struct {
    name string
    desc string
}
Then in a worker package:
package worker

func Test() {
    result := &resultEvent{name: "test"}
}
Of course the idea is to eventually send this result down a channel, but even this simple example won't work, because worker doesn't know about resultEvent.
What would be the correct way of doing this?
Update:
It should be noted that there will be many worker packages, doing different things. Sort of like "plugins" (only not pluggable at all).
I don't want to define a redundant struct in each Go file and then have to maintain that over maybe 50 very different worker packages.
I'm looking for the correct way to structure this, so I can reuse one struct for all worker packages.
Basically, anything that lives in package main will only ever be able to be referenced from that package. If you want it to be shared between multiple packages, put it in the worker package and export it (upper-case the first letter), then import worker from main.
No matter what, you will have to import the package which contains the type you'd like to use. However, the reason this isn't working for you is that your type is not exported. You need to upper-case the type's name, like:
type ResultEvent struct {
    name string
    desc string
}
It's worth checking out what exported vs. unexported means, but basically upper case means exported, which is similar to the public specifier in other systems languages. Lower case means unexported, which is more like internal or private.
As pointed out in the comment and the other answer, you can't import main, so I believe you'll have to move your type's definition as well.
One possible way would be something like:
package workerlib

type ResultEvent struct {
    Name        string // Export the struct fields, unless you have a
    Description string // real good reason not to.
}
Then stick the rest of the worker utility functions in that package. Unless you provide suitable methods to read the name and description from an event, simply export the fields. If you have an absolute need to make them changeable only from within the package they're defined in, you could keep them unexported, then provide a function to create a ResultEvent as well as methods to read the name and description.
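Putting it together, a minimal sketch of main and one worker sharing the exported type through a channel might look like this; the myapp module path is an assumption:
// workerlib/event.go
package workerlib

type ResultEvent struct {
    Name        string
    Description string
}

// worker/worker.go
package worker

import "myapp/workerlib"

// Test sends its result on a channel of the shared workerlib type.
func Test(results chan<- workerlib.ResultEvent) {
    results <- workerlib.ResultEvent{Name: "test", Description: "example"}
}

// main.go
package main

import (
    "fmt"

    "myapp/worker"
    "myapp/workerlib"
)

func main() {
    results := make(chan workerlib.ResultEvent, 1)
    worker.Test(results)
    fmt.Println(<-results)
}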

Should I avoid package singletons in golang?

At the moment I have a package store with the following content:
package store

var (
    db *Database
)

func Open(url string) error {
    // open db connection
    return nil
}

func FindAll(model interface{}) error {
    // return all entries
    return nil
}

func Close() {
    // close db connection
}
This allows me to use store.FindAll from other packages after I have done store.Open in main.go.
However, from what I have seen so far, most packages prefer to provide a struct that you need to initialize yourself. There are only a few cases where this global approach is used.
What are downsides of this approach and should I avoid it?
You can't instantiate connections to 2 storages at once.
You can't easily mock out storage in unit tests of dependent code using convenient tools like gomock.
The standard http package has a ServeMux for generic use cases, but it also has one default instance of ServeMux called DefaultServeMux (http://golang.org/pkg/net/http/#pkg-variables) for convenience, so that when you call http.HandleFunc it registers the handler on the default mux. You can find the same approach used in log and many other packages. This is essentially your "singleton" approach.
However, I don't think it's a good idea to follow that pattern in your use case, since the users need to call Open regardless of the default database. Because of that, using a default instance would not really help and would actually make it less convenient:
d := store.Open(...)
defer d.Close()
d.FindAll(...)
is much easier to both write and read than:
store.Open(...)
defer store.Close()
store.FindAll(...)
Also, there are semantic problems: what should happen if someone calls Open twice?
store.Open(...)
defer store.Close()
...
store.Open(...)
store.FindAll(...) // Which db is this referring to?
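For comparison, a hedged sketch of the struct-based version of the store package, which supports the d := store.Open(...) usage shown above; the Database type is a stand-in for whatever driver handle the real code holds:
package store

import "errors"

// Database stands in for the underlying driver connection.
type Database struct {
    url string
}

// Store wraps a single connection; callers create as many as they need
// and can pass a *Store (or an interface it satisfies) into tests.
type Store struct {
    db *Database
}

func Open(url string) (*Store, error) {
    if url == "" {
        return nil, errors.New("store: empty url")
    }
    // open the real connection here
    return &Store{db: &Database{url: url}}, nil
}

func (s *Store) FindAll(model interface{}) error {
    // query all entries into model
    return nil
}

func (s *Store) Close() {
    // close the connection
}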
