`bufio.Writer` or `io.Writer`? - go

I have a function that writes data to anything that implements an interface with a Write(b []byte) (n int, err error) method. Right now in my program I write to an actual Conn, but following best practices (https://dave.cheney.net/2016/08/20/solid-go-design) and because I only call Write, I want to accept the minimum interface that implements that method. For that, I accept a parameter with interface io.Writer.
Since my function could be outputting lots of data very fast, should I accept a bufio.Writer instead? Or is it the function's consumer's responsibility to use a buffered writer rather than a plain one? What are the best practices?

Create your function to accept io.Writer, and document that it writes a lot of data, so a bufio.Writer or a similar construct is advised.
Do not limit users of your function to bufio.Writer, as you only use the functionality of an io.Writer. Users may also have other "buffered" implementations of io.Writer that are sufficient for them.
Don't decide what's good for the users of your library, let them decide what's good for them. Should the users find bufio.Writer useful or better than their io.Writer implementation, they can always wrap it in a bufio.Writer and pass that (simply using bufio.NewWriter(w)).
If you create your function to accept io.Writer, users can always add the wrapping functionality very easily with a one-line utility function:
func wrapAndPass(w io.Writer) {
	bw := bufio.NewWriter(w)
	yourFunction(bw)
	bw.Flush() // don't forget to flush, or buffered data may be lost
}
If you create your function to accept a *bufio.Writer, then there is no way for the users to undo this "wrapping". Users will be forced to always create and pass a *bufio.Writer, whether it's needed or not.
Also you may opt to provide two functions: your original function taking io.Writer, and the above wrapping-and-passing utility function. If you do so, it would also be nice to check if the passed writer is already a *bufio.Writer, in which case wrapping should be avoided. Something like this:
func wrapIfNeededAndPass(w io.Writer) {
	bw, ok := w.(*bufio.Writer)
	if !ok {
		bw = bufio.NewWriter(w)
		defer bw.Flush() // flush only the wrapper we created here
	}
	yourFunction(bw)
}
But usually this kind of wrapping is only applied if extra functionality is needed "beyond" io.Writer.


Allow passing only a pointer, not a value, to a func

I want to allow passing only a pointer to a struct to a func, and restrict values. Is it possible?
What I want to do:
Foo(&Bar{}) // allowed
Foo(Bar{}) // IDE/compilation error
Currently I'm using a signature like func Foo(bar any), which of course allows passing any type and interface to the function, and in some cases that can cause problems.
The set of types that can be passed to this function should not be limited, and I don't want to require a specific interface, etc. Maybe this can be achieved with generics? But I'm not sure how to do it correctly.
I'm using Go 1.18.
Yes, you can use generics like this:
func Foo[T any](t *T) {
	// …
}
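A runnable sketch of how this behaves (Bar is an illustrative type, and Foo is given a return value here only so the effect is visible): the pointer call compiles, while passing a value fails type inference, because a Bar value cannot match the parameter type *T.

```go
package main

import "fmt"

type Bar struct{ N int }

// Foo's parameter type *T can only be satisfied by a pointer,
// so passing a Bar value fails to compile, while *Bar is fine.
func Foo[T any](t *T) *T {
	return t
}

func main() {
	b := Foo(&Bar{N: 1}) // OK: T is inferred as Bar
	fmt.Println(b.N)
	// Foo(Bar{}) // compile error: Bar does not match *T
}
```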

How to write append "method" in go

I would like to explore Go's possibilities. How would you turn append into a method? Simply: "st.app(t) == append(st, t)"
This is what I got:
type T interface{}
type ST []T

func (st ST) app(t T) ST {
	return append(st, t)
}
but this code does not check for type compatibility:
append([]int{1,2},"a") // correctly gives error
ST{1,2}.app("a") // dumbly gives [1 2 a] !!!
I know why the code does not check for type compatibility but What is the right way to do it? Is it possible?
I appreciate your help to understand how Go works.
The right way to do this is to call append() as the built-in function it is. It's not an accident that it's a built-in function; you can't write append itself in Go, and anything that approximates it would be unsafe (and over-complicated).
Don't fight Go. Just write the line of code. In the time you spend trying to create tricky generics, you could have solved three more issues and shipped the product. That's how Go works.
This isn't to say Go will never have generics, but as we move towards Go 2 consider the following from Russ Cox:
For example, I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. As a result, I can't answer a design question like whether to support generic methods, which is to say methods that are parameterized separately from the receiver. If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones.
(On the other hand, Generics is the most responded-to topic in ExperienceReports, so it's not that no one is aware of the interest. But don't fight Go. It's a fantastic language for shipping great software. It is not, however, a generic language.)
4 years later, there is actually a possible solution to get methods that are parameterized separately from the receiver.
With Go 1.18 generics:
It still does not support parameterized methods.
It does support parameterized receivers, meaning the receiver's type may have type parameters.
Jaana B. Dogan proposes "Generics facilitators in Go" based on that
Parameterized receivers are a useful tool and helped me develop a common pattern, facilitators, to overcome the shortcomings of having no parameterized methods.
Facilitators are simply a new type that has access to the type you wished you had generic methods on.
Her example:
package database

type Client struct{ ... }

type Querier[T any] struct {
	client *Client
}

func NewQuerier[T any](c *Client) *Querier[T] {
	return &Querier[T]{
		client: c,
	}
}

func (q *Querier[T]) All(ctx context.Context) ([]T, error) {
	// implementation
}

func (q *Querier[T]) Filter(ctx context.Context, filter ...Filter) ([]T, error) {
	// implementation
}
You can use it as:
querier := database.NewQuerier[Person](client)
In your case though, while you could define an Appender struct, it still needs to apply on a concrete struct, which is why using the built-in append() remains preferable.
But in general, should you need parameterized methods, Jaana's pattern can help.
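For the original question itself, Go 1.18 generics are in fact enough, because the type parameter can live on the receiver's type rather than on the method. A runnable sketch, reusing the ST and app names from the question:

```go
package main

import "fmt"

// ST is a generic slice type; its element type T is a parameter
// of the type, so methods on ST[T] may use T freely.
type ST[T any] []T

// app appends t and returns the extended slice, so
// st.app(t) behaves like append(st, t) — with type checking.
func (st ST[T]) app(t T) ST[T] {
	return append(st, t)
}

func main() {
	fmt.Println(ST[int]{1, 2}.app(3)) // [1 2 3]
	// ST[int]{1, 2}.app("a")         // compile error: "a" is not an int
}
```

Unlike the interface{}-based version from the question, ST[int]{1, 2}.app("a") is now rejected at compile time.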

What is the preferred way to implement testing mocks in Go?

I am building a simple CLI tool in Go that acts as a wrapper for various password stores (Chef Vault, Ansible Vault, Hashicorp Vault, etc). This is partially an exercise to get familiar with Go.
In working on this, I came across a situation where I was writing tests and found I needed to create interfaces for many things, just to have the ability to mock dependencies. As such, a fairly simple implementation seems to have a bunch of abstraction, for the sake of the tests.
However, I was recently reading The Go Programming Language and found an example where they mocked their dependencies in the following way.
func Parse() map[string]string {
	s := openStore()
	// Do something with s to parse into a map…
	return s.contents
}

var storeFunc = func openStore() *Store {
	// concrete implementation for opening store
}

// and in the testing file…
func TestParse(t *testing.T) {
	openStore := func() {
		// set contents of mock…
	}
	Parse()
	// etc...
}
So for the sake of testing, we store this concrete implementation in a variable, and then we can essentially re-declare the variable in the tests and have it return what we need.
Otherwise, I would have created an interface for this (despite currently only having one implementation) and inject that into the Parse method. This way, we could mock it for the test.
So my question is: What are the advantages and disadvantages of each approach? When is it more appropriate to create an interface for the purposes of a mock, versus storing the concrete function in a variable for re-declaration in the test?
For testing purposes, I tend to use the mocking approach you described instead of creating new interfaces. One of the reasons is that, AFAIK, there is no direct way to identify which structs implement a given interface, which matters to me when I want to verify that the mocks are doing the right thing.
The main drawback of this approach is that the variable is essentially a package-level global variable (even though it's unexported). So all the drawbacks with declaring global variables apply.
In your tests, you will definitely want to use defer to re-assign storeFunc back to its original concrete implementation once the tests completed.
var storeFunc = func() *Store {
	// concrete implementation for opening store
}

// and in the testing file…
func TestParse(t *testing.T) {
	storeFuncOriginal := storeFunc
	defer func() {
		storeFunc = storeFuncOriginal
	}()
	storeFunc = func() *Store { // plain assignment, not :=, so the package-level variable is replaced
		// set contents of mock…
	}
	Parse()
	// etc...
}
By the way, var storeFunc = func openStore() *Store won't compile.
There is no "right way" of answering this.
Having said this, I find the interface approach more general and more clear than defining a function variable and setting it for the test.
Here are some comments on why:
The function variable approach does not scale well if there are several functions you need to mock (in your example it is just one function).
The interface makes it clearer which behaviour is being injected into the function/module, as opposed to the function variable, which ends up hidden in the implementation.
The interface allows you to inject a type with a state (a struct) which might be useful for configuring the behaviour of the mock.
You can of course rely on the "function variable" approach for simple cases and use the "interface" for more complex functionality, but if you want to be consistent and use just one approach I'd go with the "interface".
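A minimal runnable sketch of the interface approach this answer recommends (Store, Contents, and mockStore are illustrative names, not from a real library):

```go
package main

import "fmt"

// Store is the behaviour Parse depends on.
type Store interface {
	Contents() map[string]string
}

// Parse works against the interface, so any Store implementation
// — real or mock — can be injected.
func Parse(s Store) map[string]string {
	return s.Contents()
}

// mockStore is a test double whose state is configured directly.
type mockStore struct {
	contents map[string]string
}

func (m *mockStore) Contents() map[string]string { return m.contents }

func main() {
	m := &mockStore{contents: map[string]string{"user": "alice"}}
	fmt.Println(Parse(m)["user"])
}
```

Because mockStore is a struct, each test can build its own instance with its own state — no package-level variable to reassign and restore.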
I tackle the problem differently. Given
func Parse(s Store) map[string]string {
	// Do stuff on the interface Store
}
you have several advantages:
You can use a mock or a stub Store as you see fit.
Imho, the code becomes more transparent. The signature alone makes clear that a Store implementation is required. And the code does not need to be polluted with error handling for opening the Store.
The code documentation can be kept more concise.
However, this makes something pretty obvious: Parse is a function which could be attached to a Store, which most likely makes more sense than passing the store around.

How do I properly use these variables in a separate function in Go?

Apologies for what is likely a very elementary question.
I'm using http's ListenAndServe, and it calls the following function:
func library(writer http.ResponseWriter, request *http.Request)
A lot of the code contained in that function applies elsewhere, so I wanted to bring it out into another function, such as:
func commonFunction(doThing bool, writer http.ResponseWriter, request *http.Request)
But is that function header for commonFunction correct if I'm passing those two variables from library into it?
Would I call it as commonFunction(true, writer, request)?
I'm mostly confused about whether I should be passing pointers to these variables. It would make sense not to for http.Request, as it's already a pointer, but what about http.ResponseWriter? Surely I don't want to recreate the variable?
Your signature looks fine. One thing many people overlook when they first start doing web work in Go is that writer http.ResponseWriter is an interface value. An interface value holds a reference to its underlying concrete value, and in the standard http server that concrete value is itself a pointer, so copying the interface value does not copy the response writer. You can feel free to pass writer on to commonFunction; it is already, in effect, a reference.

Should I avoid package singletons in golang?

At the moment I have a package store with the following content:
package store
var (
db *Database
)
func Open(url string) error {
// open db connection
}
func FindAll(model interface{}) error {
// return all entries
}
func Close() {
// close db connection
}
This allows me to use store.FindAll from other packages after I have done store.Open in main.go.
However, from what I've seen so far, most packages prefer to provide a struct that you initialize yourself. There are only a few cases where this global approach is used.
What are downsides of this approach and should I avoid it?
You can't instantiate connections to 2 storages at once.
You can't easily mock out storage in unit tests of dependent code using convenient tools like gomock.
The standard http package has a ServeMux for generic use cases, but it also has one default instance of ServeMux, called DefaultServeMux (http://golang.org/pkg/net/http/#pkg-variables), for convenience: when you call http.HandleFunc, it registers the handler on the default mux. You can find the same approach in the log package and many others. This is essentially your "singleton" approach.
However, I don't think it's a good idea to follow that pattern in your use case, since the users need to call Open regardless of the default database. And, because of that, using a default instance would not really help and instead would actually make it less convenient:
d := store.Open(...)
defer d.Close()
d.FindAll(...)
is much easier to both write and read than:
store.Open(...)
defer store.Close()
store.FindAll(...)
There are also semantic problems: what should happen if someone calls Open twice?
store.Open(...)
defer store.Close()
...
store.Open(...)
store.FindAll(...) // Which db is this referring to?
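A sketch of the struct-based alternative, assuming a placeholder Database type (the real connection logic is elided):

```go
package main

import "fmt"

// Database stands in for a real connection handle.
type Database struct{ url string }

// Store owns its own connection, so callers can open as many as they need
// and can substitute a mock Store in tests.
type Store struct {
	db *Database
}

// Open returns a new Store; each call yields an independent connection.
func Open(url string) (*Store, error) {
	return &Store{db: &Database{url: url}}, nil
}

func (s *Store) FindAll(model interface{}) error {
	// return all entries
	return nil
}

func (s *Store) Close() {
	// close db connection
}

func main() {
	a, _ := Open("postgres://a")
	b, _ := Open("postgres://b") // two storages at once: no problem
	defer a.Close()
	defer b.Close()
	fmt.Println(a.db.url, b.db.url)
}
```

Calling Open twice is now unambiguous: each call produces its own Store, and every FindAll says exactly which database it refers to.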
