Why return a func in Golang

I was looking into golang contexts recently, and found that the WithCancel() is implemented in an interesting way.
func WithCancel(parent Context) (ctx Context, cancel CancelFunc) {
    if parent == nil {
        panic("cannot create context from nil parent")
    }
    c := newCancelCtx(parent)
    propagateCancel(parent, &c)
    return &c, func() { c.cancel(true, Canceled) }
}
WithCancel() returns a ctx, and also a func to cancel that very same context. Why is this done rather than exposing Cancel() as a method on the type itself, like
func (c *cancelCtx) Cancel() {
    c.cancel(true, Canceled)
}
I understand that returning a func allows you to return a different func depending on runtime conditions, but there is no dynamic behaviour here; it's always the same func. Is this just because of the functional paradigm?
Reference: https://cs.opensource.google/go/go/+/master:src/context/context.go;l=232-239?q=context&ss=go%2Fgo

Not all contexts are cancel-able. You could argue that for those that aren't, the Cancel() method could be a no-op.
But then you would always have to call Cancel() whenever you work with context.Context because you don't (can't) know whether it truly needs cancelling. This would be unnecessary in a lot of cases, would make code slower (cancel functions are usually called deferred) and would bloat the code.
Also, the power to cancel a context belongs to its creator only. The creator may choose to share this responsibility by passing along the cancel function, but if it doesn't, sharing the context alone does not (and should not) allow cancelling it. If Cancel() were part of context.Context, this restriction could not be enforced. For details, see Cancel context from child.
Interfaces, especially widely used ones, should be kept small and minimal, not padded with rarely useful methods.

Related

config.Check does not recognize context.Context as context.Context interface

I am trying to execute types.Config.Check on github.com/OrlovEvgeny/go-mcache/mcache.go in order to extract the definitions and types in the Go file.
The problem is that the Check() method fails with:
cannot use ctx (variable of type context.Context) as context.Context value in argument to gcmap.NewGC: context.Context does not implement context.Context (wrong type for method Deadline)
have Deadline() (deadline time.Time, ok bool)
want Deadline() (deadline time.Time, ok bool)
The Context used in mcache library is quite standard, and is passed to NewGC():
func (mc *CacheDriver) initStore() (context.Context, context.CancelFunc) {
    ctx, finish := context.WithCancel(context.Background())
    mc.storage = safeMap.NewStorage()
    mc.gc = gcmap.NewGC(ctx, mc.storage) // <-- The problem is here
    return ctx, finish
}
I should also point out the library compiles successfully and works.
I have debugged the issue; ultimately the following line returns false (types/predicates.go:416):
    return x.obj == y.obj
I have compared the two objects, and they seem identical apart from their token.Pos, which I guess is why the comparison returns false.
Does anyone have any idea what I am missing here?
Thanks!

bounded 'mutex pools' for synchronized entity behaviour

I have a function:
type Command struct {
    id Uuid
}

func handleCommand(cmd Command) {
    entity := lookupEntityInDataBase(cmd.id)
    entity.handleCommand(cmd)
    saveEntityInDatabase(entity)
}
However, this function can be called in parallel, and the entities are assumed to be non-threadsafe, leading to races both in the entity's state and in the state that gets saved to the database.
A simple mutex locked at the beginning and released at the end of this function would solve this, but would be overly pessimistic synchronisation, since entities of different instances (i.e. different uuids) should be allowed to handle their commands in parallel.
An alternative would be to keep a map[uuid]sync.Mutex and, in a threadsafe manner, create a new mutex whenever a uuid is encountered for the first time. However, this results in a map that can grow endlessly, holding every uuid ever encountered at runtime.
I thought of cleaning up the mutexes afterwards, but doing that threadsafely, while another goroutine may already be waiting for the mutex, opens so many cans of worms.
I hope I am missing a very simple and elegant solution.
There isn't really an elegant solution. Here's a version using channels:
var m = map[Uuid]chan struct{}{}
var l sync.Mutex

func handleCommand(cmd Command) {
    for {
        l.Lock()
        ch, ok := m[cmd.id]
        if !ok {
            ch = make(chan struct{})
            m[cmd.id] = ch
            defer func() {
                l.Lock()
                delete(m, cmd.id)
                close(ch)
                l.Unlock()
            }()
            l.Unlock()
            break
        }
        l.Unlock()
        <-ch
    }
    entity := lookupEntityInDataBase(cmd.id)
    entity.handleCommand(cmd)
    saveEntityInDatabase(entity)
}
The Moby project has such a thing as a library, see https://github.com/moby/locker
The one-line description is,
locker provides a mechanism for creating finer-grained locking to help free up more global locks to handle other tasks.

Attempting to acquire a lock with a deadline in golang?

How can one only attempt to acquire a mutex-like lock in go, either aborting immediately (like TryLock does in other implementations) or by observing some form of deadline (basically LockBefore)?
I can think of 2 situations right now where this would be greatly helpful and where I'm looking for some sort of solution. The first one is: a CPU-heavy service which receives latency sensitive requests (e.g. a web service). In this case you would want to do something like the RPCService example below. It is possible to implement it as a worker queue (with channels and stuff), but in that case it becomes more difficult to gauge and utilize all available CPU. It is also possible to just accept that by the time you acquire the lock your code may already be over deadline, but that is not ideal as it wastes some amount of resources and means we can't do things like a "degraded ad-hoc response".
/* Example 1: LockBefore() for latency sensitive code. */
func (s *RPCService) DoTheThing(ctx context.Context, ...) ... {
    if s.someObj[req.Parameter].mtx.LockBefore(ctx.Deadline()) {
        defer s.someObj[req.Parameter].mtx.Unlock()
        ... expensive computation based on internal state ...
    } else {
        return s.cheapCachedResponse[req.Parameter]
    }
}
Another case is when you have a bunch of objects which should be touched, but which may be locked, and where touching them should complete within a certain amount of time (e.g. updating some stats). In this case you could also either use LockBefore() or some form of TryLock(), see the Stats example below.
/* Example 2: TryLock() for updating stats. */
func (s *StatsObject) updateObjStats(key, value interface{}) {
    if s.someObj[key].TryLock() {
        defer s.someObj[key].Unlock()
        ... update stats ...
        ... fill in s.cheapCachedResponse ...
    }
}

func (s *StatsObject) UpdateStats() {
    s.someObj.Range(s.updateObjStats)
}
For ease of use, let's assume that in the above case we're talking about the same s.someObj. Any object may be blocked by DoTheThing() operations for a long time, which means we would want to skip it in updateObjStats. Also, we would want to make sure that we return the cheap response in DoTheThing() in case we can't acquire a lock in time.
Unfortunately, sync.Mutex only and exclusively has the functions Lock() and Unlock(). There is no way to potentially acquire a lock. Is there some easy way to do this instead? Am I approaching this class of problems from an entirely wrong angle, and is there a different, more "go"ish way to solve them? Or will I have to implement my own Mutex library if I want to solve these? I am aware of issue 6123 which seems to suggest that there is no such thing and that the way I'm approaching these problems is entirely un-go-ish.
Use a channel with a buffer size of one as a mutex:
l := make(chan struct{}, 1)
Lock:
l <- struct{}{}
Unlock:
<-l
Try lock:
select {
case l <- struct{}{}:
    // lock acquired
    <-l
default:
    // lock not acquired
}
Try with timeout:
select {
case l <- struct{}{}:
    // lock acquired
    <-l
case <-time.After(time.Minute):
    // lock not acquired
}
I think you're asking several different things here:
Does this facility exist in the standard library? No, it doesn't. You can probably find implementations elsewhere; this is possible to implement using the standard library (atomics, for example).
Why doesn't this facility exist in the standard library: the issue you mentioned in the question is one discussion. There are also several discussions on the go-nuts mailing list with several Go code developers contributing: link 1, link 2. And it's easy to find other discussions by googling.
How can I design my program such that I won't need this?
The answer to (3) is more nuanced and depends on your exact issue. Your question already says:
It is possible to implement it as a worker queue (with channels and stuff), but in that case it becomes more difficult to gauge and utilize all available CPU
but does not provide details on why it would be more difficult to utilize all CPUs, as opposed to checking a mutex's lock state.
In Go you usually want channels whenever the locking schemes become non-trivial. It shouldn't be slower, and it should be much more maintainable.
How about this package: https://github.com/viney-shih/go-lock . It uses channels and a semaphore (golang.org/x/sync/semaphore) to solve your problem.
go-lock implements TryLock, TryLockWithTimeout and TryLockWithContext functions in addition to Lock and Unlock. It provides flexibility to control the resources.
Examples:
package main

import (
    "context"
    "fmt"
    "time"

    lock "github.com/viney-shih/go-lock"
)

func main() {
    casMut := lock.NewCASMutex()

    casMut.Lock()
    defer casMut.Unlock()

    // TryLock without blocking
    fmt.Println("Return", casMut.TryLock()) // Return false

    // TryLockWithTimeout without blocking
    fmt.Println("Return", casMut.TryLockWithTimeout(50*time.Millisecond)) // Return false

    // TryLockWithContext without blocking
    ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
    defer cancel()
    fmt.Println("Return", casMut.TryLockWithContext(ctx)) // Return false

    // Output:
    // Return false
    // Return false
    // Return false
}
PMutex from the package https://github.com/myfantasy/mfs implements RTryLock(ctx context.Context) and TryLock(ctx context.Context):
// ctx - some context
ctx := context.Background()

mx := mfs.PMutex{}

isLocked := mx.TryLock(ctx)
if isLocked {
    // do something
    mx.Unlock()
} else {
    // do something else
}

How to use context.WithCancel and context.WithTimeout API together, and is it necessary?

Now I did something like this:
func contextHandler(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx, cancel := context.WithCancel(r.Context())
        ctx, cancel = context.WithTimeout(ctx, config.EnvConfig.RequestTimeout)
        defer cancel()
        if cn, ok := w.(http.CloseNotifier); ok {
            go func(done <-chan struct{}, closed <-chan bool) {
                select {
                case <-done:
                case <-closed:
                    logger.Debug("message", "client connection has gone away, request will be cancelled")
                    cancel()
                }
            }(ctx.Done(), cn.CloseNotify())
        }
        h.ServeHTTP(w, r.WithContext(ctx))
    })
}
Please pay attention to these two lines:
ctx, cancel := context.WithCancel(r.Context())
ctx, cancel = context.WithTimeout(ctx, config.EnvConfig.RequestTimeout)
According to my tests (deliberately killing the client request, and deliberately making the request exceed the deadline), both cases work fine: I receive the cancellation signal and the timeout signal as expected. My concern is that the latter cancel function overrides the previous one returned by context.WithCancel(r.Context()), so:
Is it a proper way to use these two APIs together like this?
Is it even necessary to use these two APIs together?
Please help to explain.
Because the CancelFunc returned from your WithCancel call is being immediately overwritten, this causes a resource (i.e. memory) leak in your program. From the context documentation:
The WithCancel, WithDeadline, and WithTimeout functions take a Context (the parent) and return a derived Context (the child) and a CancelFunc. Calling the CancelFunc cancels the child and its children, removes the parent's reference to the child, and stops any associated timers. Failing to call the CancelFunc leaks the child and its children until the parent is canceled or the timer fires.
Removing the WithCancel context from your code will fix this problem.
Additionally, cancellation of the HTTP request is managed by the HTTP server, as described in the http.Request.Context method documentation:
For incoming server requests, the context is canceled when the client's connection closes, the request is canceled (with HTTP/2), or when the ServeHTTP method returns.
When the server cancels the request context, all child contexts will be cancelled.
You can just use WithTimeout() instead of using both APIs, because WithTimeout() returns a context.CancelFunc just as WithCancel() does, which can be called at any time to cancel the target process/routine. Of course, the cancellation must happen before hitting the deadline set by WithTimeout().
So,
Is it a proper way to use these two APIs together like this?
Is it even necessary to use these two APIs together?
No, there is no need to use both; use the context.CancelFunc returned by either API in package context.

Should I care about providing asynchronous calls in my go library?

I am developing a simple go library for jsonrpc over http.
There is the following method:
rpcClient.Call("myMethod", myParam1, myParam2)
This method internally does a http.Get() and returns the result or an error (tuple).
This is of course synchronous for the caller and returns when the Get() call returns.
Is this the way to provide libraries in Go? Should I leave it to the user of my library to make it asynchronous if she wants to?
Or should I provide a second function called:
rpcClient.CallAsync()
and return a channel here? Because channels cannot carry tuples, I would have to pack the (response, error) tuple into a struct and return that struct instead.
Does this make sense?
Otherwise the user would have to wrap every call in an ugly method like:
result := make(chan AsyncResponse)
go func() {
    res, err := rpcClient.Call("myMethod", myParam1, myParam2)
    result <- AsyncResponse{res, err}
}()
Is there a best practice for go libraries and asynchrony?
The whole point of Go's execution model is to hide the asynchronous operations from the developer and behave like a threaded model with blocking operations. Behind the scenes there are green threads, asynchronous IO, and a very sophisticated scheduler.
So no, you shouldn't provide an async API in your library. Networking in Go is done in a pseudo-blocking way from the code's perspective, and you open as many goroutines as needed, as they are very cheap.
So your last example is the way to go, and I don't consider it ugly, because it allows the developer to choose the concurrency model. In the context of an HTTP server, where each request is handled in a separate goroutine, I'd just call rpcClient.Call("myMethod", myParam1, myParam2).
Or if I want a fanout, I'll create fanout logic.
You can also create a convenience function for executing the call and returning on a channel:
func CallAsync(method string, p1, p2 interface{}) chan AsyncResponse {
    result := make(chan AsyncResponse)
    go func() {
        res, err := rpcClient.Call(method, p1, p2)
        result <- AsyncResponse{res, err}
    }()
    return result
}
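To show the caller-chooses model in practice, here is a self-contained sketch; `call` and its signature are stand-ins for rpcClient.Call, and the AsyncResponse struct is one possible shape for the tuple:

```go
package main

import (
	"fmt"
	"time"
)

// AsyncResponse packs the (result, error) tuple for the channel,
// since a channel carries a single value.
type AsyncResponse struct {
	Res string
	Err error
}

// call stands in for rpcClient.Call; its signature is an assumption
// made for this sketch.
func call(method string, params ...interface{}) (string, error) {
	time.Sleep(10 * time.Millisecond) // simulated network round trip
	return "ok", nil
}

// CallAsync wraps the blocking call in a goroutine and returns a
// channel the caller can select on. The buffer of one means the
// goroutine never blocks (and never leaks) if the caller gives up.
func CallAsync(method string, params ...interface{}) <-chan AsyncResponse {
	result := make(chan AsyncResponse, 1)
	go func() {
		res, err := call(method, params...)
		result <- AsyncResponse{res, err}
	}()
	return result
}

func main() {
	// The caller decides the concurrency model: here, a timeout.
	select {
	case r := <-CallAsync("myMethod", 1, 2):
		fmt.Println(r.Res, r.Err) // ok <nil>
	case <-time.After(time.Second):
		fmt.Println("timed out")
	}
}
```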
