I'm using this to receive SNMP traps: https://github.com/soniah/gosnmp
Now, let's say I want to programmatically break out of the following call (taken from here):
err := tl.Listen("0.0.0.0:9162")
What are my best approaches to this?
I'm somewhat new to Go and didn't find a way to break out of a goroutine that I have no way of modifying ("3rd party").
Thanks,
Short answer: You can't. There's no way to kill a goroutine (short of killing the entire program) from outside the goroutine.
Long answer: A goroutine can listen for some sort of "terminate" signal (via channels, signals, or any other mechanism). But ultimately, the goroutine must terminate from within.
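For illustration, here is a minimal sketch of that pattern (the quit channel and the work() function are hypothetical, not part of any library): the goroutine checks a quit channel and returns when it is closed.

quit := make(chan struct{})

go func() {
    for {
        select {
        case <-quit:
            return // terminate from inside the goroutine
        default:
            work() // one unit of work; must return regularly so quit is checked
        }
    }
}()

// Elsewhere, to ask the goroutine to stop:
close(quit)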
Looking at the library in your example, it appears this functionality is not provided.
The standard https://golang.org/pkg/net/#Conn interface provides the SetDeadline method (together with SetReadDeadline and SetWriteDeadline) to set a hard cut-off time after which blocked I/O calls on a stalled connection fail. As I see in the source code:
type GoSNMP struct {
    // Conn is net connection to use, typically established using GoSNMP.Connect()
    Conn net.Conn
    ...
    // Timeout is the timeout for the SNMP Query
    Timeout time.Duration
    ...
}
The Conn field (a net.Conn) is exported, so you can try to access it directly to set a deadline.
type TrapListener struct {
    OnNewTrap func(s *SnmpPacket, u *net.UDPAddr)
    Params    *GoSNMP
    ...
}
In turn, TrapListener exports the *GoSNMP struct via its Params field, so you have access to it. Try this:
tl := TrapListener{...}
tl.Params.Conn.SetDeadline(time.Now().Add(1*time.Second))
tl.Listen(...)
However, this line makes me doubt it - it looks like Listen doesn't use the stored connection and its options:
func (t *TrapListener) Listen(addr string) (err error) {
    ...
    conn, err := net.ListenUDP("udp", udpAddr)
    ...
}
But you may try :)
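For reference, here is what the deadline mechanism looks like on a plain net.UDPConn (standard library only, not gosnmp; the port and buffer size are placeholders): a read past the deadline fails with an error whose Timeout() method returns true.

package main

import (
    "log"
    "net"
    "time"
)

func main() {
    conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 9162})
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // Give up on the read after one second.
    conn.SetReadDeadline(time.Now().Add(1 * time.Second))

    buf := make([]byte, 4096)
    _, _, err = conn.ReadFromUDP(buf)
    if ne, ok := err.(net.Error); ok && ne.Timeout() {
        log.Println("read timed out, no packet received")
    }
}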
I am unable to figure out why the net/rpc Client.Go method requires you to specifically provide a buffered channel.
From the documentation,
func (*Client) Go
func (client *Client) Go(serviceMethod string, args interface{}, reply interface{}, done chan *Call) *Call
Go invokes the function asynchronously. It returns the Call structure
representing the invocation. The done channel will signal when the
call is complete by returning the same Call object. If done is nil, Go
will allocate a new channel. If non-nil, done must be buffered or Go
will deliberately crash.
LeGEC alluded to this in their comment.
Digging in further, you will find this bit in client.go:
func (call *Call) done() {
    select {
    case call.Done <- call:
        // ok
    default:
        // We don't want to block here. It is the caller's responsibility to make
        // sure the channel has enough buffer space. See comment in Go().
        if debugLog {
            log.Println("rpc: discarding Call reply due to insufficient Done chan capacity")
        }
    }
}
From this you can see that the library expects the call to be completely asynchronous: the done channel must have enough capacity to completely decouple the two processes (i.e. no blocking at all).
Furthermore, a select statement with a default case, as used here, is the idiomatic way to perform a non-blocking channel operation.
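To make that concrete, here is a minimal sketch (mine, not from the docs) of calling Go with an explicitly buffered done channel; client is assumed to be an established *rpc.Client, and Args and "Arith.Multiply" are hypothetical names (imports: log, net/rpc):

type Args struct{ A, B int }

func multiplyAsync(client *rpc.Client) {
    // Buffered so call.done() can always deliver without blocking, even
    // before we start receiving; capacity >= the number of in-flight calls.
    done := make(chan *rpc.Call, 1)

    var reply int
    client.Go("Arith.Multiply", &Args{A: 7, B: 8}, &reply, done)

    // ... do other work while the call is in flight ...

    call := <-done // the same *rpc.Call is delivered when it completes
    if call.Error != nil {
        log.Println("call failed:", call.Error)
        return
    }
    log.Println("result:", reply)
}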
How can one only attempt to acquire a mutex-like lock in go, either aborting immediately (like TryLock does in other implementations) or by observing some form of deadline (basically LockBefore)?
I can think of 2 situations right now where this would be greatly helpful and where I'm looking for some sort of solution. The first one is: a CPU-heavy service which receives latency sensitive requests (e.g. a web service). In this case you would want to do something like the RPCService example below. It is possible to implement it as a worker queue (with channels and stuff), but in that case it becomes more difficult to gauge and utilize all available CPU. It is also possible to just accept that by the time you acquire the lock your code may already be over deadline, but that is not ideal as it wastes some amount of resources and means we can't do things like a "degraded ad-hoc response".
/* Example 1: LockBefore() for latency sensitive code. */
func (s *RPCService) DoTheThing(ctx context.Context, ...) ... {
    if s.someObj[req.Parameter].mtx.LockBefore(ctx.Deadline()) {
        defer s.someObj[req.Parameter].mtx.Unlock()
        ... expensive computation based on internal state ...
    } else {
        return s.cheapCachedResponse[req.Parameter]
    }
}
Another case is when you have a bunch of objects which should be touched, but which may be locked, and where touching them should complete within a certain amount of time (e.g. updating some stats). In this case you could also either use LockBefore() or some form of TryLock(); see the Stats example below.
/* Example 2: TryLock() for updating stats. */
func (s *StatsObject) updateObjStats(key, value interface{}) {
    if s.someObj[key].TryLock() {
        defer s.someObj[key].Unlock()
        ... update stats ...
        ... fill in s.cheapCachedResponse ...
    }
}
func (s *StatsObject) UpdateStats() {
    s.someObj.Range(s.updateObjStats)
}
For ease of use, let's assume that in the above case we're talking about the same s.someObj. Any object may be blocked by DoTheThing() operations for a long time, which means we would want to skip it in updateObjStats. Also, we would want to make sure that we return the cheap response in DoTheThing() in case we can't acquire a lock in time.
Unfortunately, sync.Mutex has only the methods Lock() and Unlock(). There is no way to conditionally acquire a lock. Is there some easy way to do this instead? Am I approaching this class of problems from an entirely wrong angle, and is there a different, more "Go"-ish way to solve them? Or will I have to implement my own mutex library if I want to solve these? I am aware of issue 6123, which seems to suggest that there is no such thing and that the way I'm approaching these problems is entirely un-Go-ish.
Use a channel with a buffer size of one as a mutex.
l := make(chan struct{}, 1)
Lock:
l <- struct{}{}
Unlock:
<-l
Try lock:
select {
case l <- struct{}{}:
    // lock acquired
    // ... critical section ...
    <-l // unlock when done
default:
    // lock not acquired
}
Try with timeout:
select {
case l <- struct{}{}:
    // lock acquired
    // ... critical section ...
    <-l // unlock when done
case <-time.After(time.Minute):
    // lock not acquired
}
I think you're asking several different things here:
1. Does this facility exist in the standard library? No, it doesn't. You can probably find implementations elsewhere - it is possible to implement using the standard library (atomics, for example).
2. Why doesn't this facility exist in the standard library? The issue you mentioned in the question is one discussion; there are also several discussions on the go-nuts mailing list with several Go developers contributing: link 1, link 2. And it's easy to find other discussions by googling.
3. How can I design my program such that I won't need this?
The answer to (3) is more nuanced and depends on your exact issue. Your question already says
It is possible to implement it as a worker queue (with channels and
stuff), but in that case it becomes more difficult to gauge and
utilize all available CPU
but it doesn't provide details on why that would be more difficult than checking a mutex's lock state.
In Go you usually want channels whenever the locking schemes become non-trivial. It shouldn't be slower, and it should be much more maintainable.
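If you do end up needing a try/deadline-style acquire, note that golang.org/x/sync/semaphore already provides one; a weighted semaphore of capacity 1 behaves like a mutex. A rough sketch mirroring the two examples from the question (the function names are mine; sem would be created once per protected object with semaphore.NewWeighted(1); imports: context, golang.org/x/sync/semaphore):

// Non-blocking attempt, like TryLock().
func tryUpdateStats(sem *semaphore.Weighted) {
    if sem.TryAcquire(1) {
        defer sem.Release(1)
        // ... update stats ...
    }
}

// Deadline-aware attempt, like LockBefore().
func doTheThing(ctx context.Context, sem *semaphore.Weighted) {
    if err := sem.Acquire(ctx, 1); err != nil {
        // ctx expired or was cancelled before the lock became available:
        // return the cheap cached response here instead.
        return
    }
    defer sem.Release(1)
    // ... expensive computation based on internal state ...
}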
How about this package: https://github.com/viney-shih/go-lock ? It uses channels and a semaphore (golang.org/x/sync/semaphore) to solve your problem.
go-lock implements TryLock, TryLockWithTimeout and TryLockWithContext functions in addition to Lock and Unlock. It provides flexibility to control the resources.
Examples:
package main

import (
    "context"
    "fmt"
    "time"

    lock "github.com/viney-shih/go-lock"
)

func main() {
    casMut := lock.NewCASMutex()

    casMut.Lock()
    defer casMut.Unlock()

    // TryLock without blocking
    fmt.Println("Return", casMut.TryLock()) // Return false

    // TryLockWithTimeout without blocking
    fmt.Println("Return", casMut.TryLockWithTimeout(50*time.Millisecond)) // Return false

    // TryLockWithContext without blocking
    ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
    defer cancel()
    fmt.Println("Return", casMut.TryLockWithContext(ctx)) // Return false

    // Output:
    // Return false
    // Return false
    // Return false
}
PMutex from package https://github.com/myfantasy/mfs
PMutex implements RTryLock(ctx context.Context) and TryLock(ctx context.Context)
// ctx - some context
ctx := context.Background()
mx := mfs.PMutex{}
isLocked := mx.TryLock(ctx)
if isLocked {
    // DO Something
    mx.Unlock()
} else {
    // DO Something else
}
I'm writing a package to control a Canon DSLR using their EDSDK DLL from Go.
This is a personal project for a photo booth to use at our wedding, at my partner's request, which I'll be happy to post on GitHub when complete :).
Looking at examples of using the SDK elsewhere, it isn't thread-safe and uses thread-local resources, so I'll need to make sure I'm calling it from a single thread during usage. While not ideal, it looks like Go provides a runtime.LockOSThread function for doing just that, although this also gets called by the core DLL interop code itself, so I'll have to wait and find out whether that interferes.
I want the rest of the application to be able to call the SDK using a higher level interface without worrying about the threading, so I need a way to pass function call requests to the locked thread/Goroutine to execute there, then pass the results back to the calling function outside of that Goroutine.
So far, I've come up with this working example, which uses very broad function definitions with []interface{} slices and passes data back and forth via channels. This will take a lot of mangling of input/output data on every call to type-assert everything back out of the interface{} slice, even though we know what to expect for each function ahead of time, but it looks like it'll work.
Before I invest a lot of time doing it this way for possibly the worst way to do it - does anyone have any better options?
package edsdk

import (
    "fmt"
    "runtime"
)

type CanonSDK struct {
    FChan chan functionCall
}

type functionCall struct {
    Function  func([]interface{}) []interface{}
    Arguments []interface{}
    Return    chan []interface{}
}

func NewCanonSDK() (*CanonSDK, error) {
    c := &CanonSDK{
        FChan: make(chan functionCall),
    }
    go c.BackgroundThread(c.FChan)
    return c, nil
}
// BackgroundThread pins itself to an OS thread and executes every queued
// function call on that thread, sending the result back on the call's
// Return channel.
func (c *CanonSDK) BackgroundThread(fcalls <-chan functionCall) {
    runtime.LockOSThread()
    for f := range fcalls {
        f.Return <- f.Function(f.Arguments)
    }
    runtime.UnlockOSThread()
}
func (c *CanonSDK) TestCall() {
    ret := make(chan []interface{})
    f := functionCall{
        Function:  c.DoTestCall,
        Arguments: []interface{}{},
        Return:    ret,
    }
    c.FChan <- f
    results := <-ret
    close(ret)
    fmt.Printf("%#v", results)
}

func (c *CanonSDK) DoTestCall([]interface{}) []interface{} {
    return []interface{}{"Test", nil}
}
For similar embedded projects I've played with, I tend to create a single worker goroutine that listens on a channel and performs all the work against that USB device, with any results sent back out on another channel.
Talk to the device only through channels, in a one-way exchange, and listen for responses on the other channel.
Since USB is serial and polled, I had to set up a dedicated channel with another goroutine that just picks items off it as they are pushed in by the looping worker goroutine.
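Sketched out, that pattern looks roughly like this (the type, field and function names are made up for illustration, not EDSDK calls; imports: runtime). Requests go in on one channel and each result comes back on a per-request reply channel, so the device is only ever touched from the locked worker goroutine:

// Hypothetical typed request/response instead of []interface{}.
type takePictureResult struct {
    Path string
    Err  error
}

type takePictureRequest struct {
    SaveTo string
    Reply  chan takePictureResult
}

func deviceWorker(requests <-chan takePictureRequest) {
    runtime.LockOSThread() // every SDK call happens on this one OS thread
    defer runtime.UnlockOSThread()
    for req := range requests {
        // ... call into the SDK here ...
        req.Reply <- takePictureResult{Path: req.SaveTo}
    }
}

// TakePicture is the typed, higher-level call the rest of the app uses.
func TakePicture(requests chan<- takePictureRequest, saveTo string) (string, error) {
    reply := make(chan takePictureResult, 1) // buffered so the worker never blocks here
    requests <- takePictureRequest{SaveTo: saveTo, Reply: reply}
    res := <-reply
    return res.Path, res.Err
}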
I have a struct called Hub with a Run() method which is executed in its own goroutine. This method sequentially handles incoming messages. Messages arrive concurrently from multiple producers (separate goroutines). Of course I use a channel to accomplish this task. But now I want to hide the Hub behind an interface so that I can choose between implementations, so exposing the channel directly as a Hub field isn't appropriate.
package main

import "fmt"
import "time"

type Hub struct {
    msgs chan string
}

func (h *Hub) Run() {
    for {
        msg, hasMore := <-h.msgs
        if !hasMore {
            return
        }
        fmt.Println("hub: msg received", msg)
    }
}

func (h *Hub) SendMsg(msg string) {
    h.msgs <- msg
}

func send(h *Hub, prefix string) {
    for i := 0; i < 5; i++ {
        fmt.Println("main: sending msg")
        h.SendMsg(fmt.Sprintf("%s %d", prefix, i))
    }
}

func main() {
    h := &Hub{make(chan string)}
    go h.Run()
    for i := 0; i < 10; i++ {
        go send(h, fmt.Sprintf("msg sender #%d", i))
    }
    time.Sleep(time.Second)
}
So I've introduced a Hub.SendMsg(msg string) method that just does h.msgs <- msg and which I can add to the HubInterface. As a Go newbie I wonder: is it safe from a concurrency perspective? And if so, is it a common approach in Go?
Playground here.
Channel send semantics do not change when you move the send into a method. Andrew's answer points out that the channel needs to be created with make to send successfully, but that was always true, whether or not the send is inside a method.
If you are concerned about making sure callers can't accidentally wind up with invalid Hub instances with a nil channel, one approach is to make the struct type private (hub) and have a NewHub() function that returns a fully initialized hub wrapped in your interface type. Since the struct is private, code in other packages can't try to initialize it with an incomplete struct literal (or any struct literal).
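A minimal sketch of that approach, reusing the HubInterface name from the question (the hub type is now unexported; imports: fmt):

type HubInterface interface {
    Run()
    SendMsg(msg string)
}

// hub is unexported, so other packages cannot build it with a struct literal.
type hub struct {
    msgs chan string
}

// NewHub is the only way to obtain a hub, so the channel is always initialized.
func NewHub() HubInterface {
    return &hub{msgs: make(chan string)}
}

func (h *hub) Run() {
    for msg := range h.msgs {
        fmt.Println("hub: msg received", msg)
    }
}

func (h *hub) SendMsg(msg string) {
    h.msgs <- msg
}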
That said, it's often possible to create invalid or nonsense values in Go and that's accepted: net.IP("HELLO THERE BOB") is valid syntax, as is net.IP{}. So if you think it's better to expose your Hub type, go ahead.
Easy answer
Yes
Better answer
No
Channels are great for emitting data from unknown goroutines, and they do so safely. However, I would recommend being careful with a few parts. In the listed example the channel is created when the consumer constructs the struct (rather than by a constructor function).
Say the consumer creates the Hub like this: &Hub{}. Perfectly valid... apart from the fact that every invocation of SendMsg() will block forever. Luckily you placed those in their own goroutines. So you're still fine, right? Wrong. You are now leaking goroutines. It seems fine... until you run this for a period of time. Go encourages you to have valid zero values; in this case &Hub{} is not valid.
Ensuring SendMsg() won't block could be achieved via a select with a default case, but then you have to decide what to do when you hit that default (e.g. throw the data away). The channel could also block for reasons other than bad setup. Say that later you do more than simply print the data after reading from the channel: if the read becomes very slow or blocks on I/O, you will start pushing back on the producers.
Ultimately, channels let you not think much about concurrency... However, if this is a high-throughput system, you have quite a bit to consider, and if it is production code, you need to understand that your API here involves SendMsg() blocking.
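For example, a non-blocking variant of SendMsg might look roughly like this (a sketch; simply reporting failure to the caller is only one possible policy):

// TrySendMsg reports whether the message was handed to the hub without blocking.
func (h *Hub) TrySendMsg(msg string) bool {
    select {
    case h.msgs <- msg:
        return true
    default:
        return false // hub busy or channel nil: the message is dropped
    }
}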
I am developing a simple go library for jsonrpc over http.
There is the following method:
rpcClient.Call("myMethod", myParam1, myParam2)
This method internally does an http.Get() and returns the result or an error (as a tuple).
This is of course synchronous for the caller and returns when the Get() call returns.
Is this the way to provide libraries in Go? Should I leave it to the user of my library to make it asynchronous if she wants to?
Or should I provide a second function called:
rpcClient.CallAsync()
and return a channel here? Because channels cannot carry tuples, I would have to pack the (response, error) pair into a struct and return that struct instead.
Does this make sense?
Otherwise the user would have to wrap every call in an ugly method like:
result := make(chan AsyncResponse)
go func() {
    res, err := rpcClient.Call("myMethod", myParam1, myParam2)
    result <- AsyncResponse{res, err}
}()
Is there a best practice for Go libraries and asynchrony?
The whole point of Go's execution model is to hide asynchronous operations from the developer and behave like a threaded model with blocking operations. Behind the scenes there are green threads, asynchronous I/O and a very sophisticated scheduler.
So no, you shouldn't provide an async API in your library. Networking in Go is done in a pseudo-blocking way from the code's perspective, and you open as many goroutines as needed; they are very cheap.
So your last example is the way to go, and I don't consider it ugly, because it allows the developer to choose the concurrency model. In the context of an HTTP server, where each request is handled in a separate goroutine, I'd just call rpcClient.Call("myMethod", myParam1, myParam2).
Or if I want a fanout - I'll create fanout logic.
You can also create a convenience function for executing the call and returning on a channel:
func CallAsync(method string, p1, p2 interface{}) chan AsyncResponse {
    result := make(chan AsyncResponse)
    go func() {
        res, err := rpcClient.Call(method, p1, p2)
        result <- AsyncResponse{res, err}
    }()
    return result
}
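Hypothetical usage of that wrapper (assuming AsyncResponse holds the result and error as Res and Err fields, mirroring the struct sketched above):

respCh := CallAsync("myMethod", myParam1, myParam2)
// ... do other work while the call is in flight ...
resp := <-respCh
if resp.Err != nil {
    // handle the error
}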