How can I create a copy (a clone if you will) of a Go context that contains all of the values stored in the original, but does not get canceled when the original does?
It does seem like a valid use case to me. Say I have an HTTP request whose context is canceled once the response is returned to the client, and at the end of this request I need to kick off an async task in a separate goroutine that will most likely outlive the parent context.
func Handler(ctx context.Context) (interface{}, error) {
	result := doStuff(ctx)
	newContext := howDoICloneYou(ctx)
	go func() {
		doSomethingElse(newContext)
	}()
	return result, nil
}
Can anyone advise how this is supposed to be done?
Of course I can keep track of all the values that may be put into the context, create a new background ctx and then just iterate through every possible value and copy... But that seems tedious and is hard to manage in a large codebase.
Since context.Context is an interface, you can simply create your own implementation that is never canceled:
import (
	"context"
	"time"
)

type noCancel struct {
	ctx context.Context
}

func (c noCancel) Deadline() (time.Time, bool)       { return time.Time{}, false }
func (c noCancel) Done() <-chan struct{}             { return nil }
func (c noCancel) Err() error                        { return nil }
func (c noCancel) Value(key interface{}) interface{} { return c.ctx.Value(key) }

// WithoutCancel returns a context that is never canceled.
func WithoutCancel(ctx context.Context) context.Context {
	return noCancel{ctx: ctx}
}
Can anyone advise how this is supposed to be done?
Yes. Don't do it.
If you need a different context, e.g. for your asynchronous background task, then create a new context. Your incoming context and the one for your background task are unrelated, and thus you must not try to reuse the incoming one.
If the unrelated new context needs some data from the original: Copy what you need and add what's new.
Related
I am playing around with Dependency Injection and HTTP servers in Golang, and am trying to wrap my head around how to make available a logger that contains information related to the current request being handled. What I want to be able to do is call a logging function inside the current http.Handler and in the functions/methods that are called from the http.Handler. Essentially, I want to leave a request ID in every log event, so I can trace exactly what happened.
My idea was to use Dependency Injection and define all the logic as methods on interfaces that would have the logger injected. I like this approach, since it makes the dependencies painstakingly clear.
I've seen other approaches, such as Zerolog's hlog package, which puts the logger into the request's context.Context; I could then modify my code to receive a context.Context as an argument, from which I could pull out the logger and call it. While this will work, I don't like it very much, in that it kinda hides away the dependency.
My issue is that I can't seem to come up with a way of using Dependency Injection in conjunction with an HTTP server where the injected dependencies are concurrency-safe.
As an example, I whipped up this code (ignore the fact that it doesn't use interfaces):
package main

import (
	"crypto/rand"
	"fmt"
	mrand "math/rand"
	"net/http"
	"time"
)

func main() {
	l := LoggingService{}
	ss := SomethingService{
		LoggingService: &l,
	}
	handler := Handler{
		SomethingService: &ss,
		LoggingService:   &l,
	}
	http.Handle("/", &handler)
	http.ListenAndServe("localhost:3333", nil)
}

type Handler struct {
	SomethingService *SomethingService
	LoggingService   *LoggingService
}

func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	reqID := generateRequestID()
	h.LoggingService.AddRequestID(reqID)
	h.LoggingService.Log("A request")
	out := h.SomethingService.DoSomething("nothing")
	w.Write([]byte(out))
}

func generateRequestID() string {
	buf := make([]byte, 10)
	rand.Read(buf)
	return fmt.Sprintf("%x", buf)
}

type SomethingService struct {
	LoggingService *LoggingService
}

func (s *SomethingService) DoSomething(input string) string {
	s.LoggingService.Log(fmt.Sprintf("Let's do something with %s", input))
	mrand.Seed(time.Now().UnixNano())
	n := mrand.Intn(3)
	s.LoggingService.Log(fmt.Sprintf("Sleeping %d seconds...", n))
	time.Sleep(time.Duration(n) * time.Second)
	s.LoggingService.Log("Done")
	return "something"
}

type LoggingService struct {
	RequestID string
}

func (l *LoggingService) AddRequestID(id string) {
	l.RequestID = id
}

func (l *LoggingService) Log(msg string) {
	if l.RequestID != "" {
		fmt.Printf("[%s] %s\n", l.RequestID, msg)
	} else {
		fmt.Printf("%s\n", msg)
	}
}
The way I've understood Go's HTTP server is that the handler is invoked, for each request, in its own goroutine. Because the LoggingService is passed as a pointer, each of these goroutines is essentially accessing the same instance of LoggingService, which can result in a race condition: one request sets l.RequestID, after which another request logs its messages but ends up with the request ID from the first request.
I am sure others have run into the need to log something in a function called by an HTTP request and to trace that log event back to the specific request. However, I haven't found any examples of people doing this.
I was thinking of delaying the instantiation of the dependencies until inside the handler. While that works, it has the side effect of creating many instances of the dependencies. This is fine for the LoggingService, but something like the SomethingService could be talking to a DB, where it would be wasteful to create a database client for every request (most DB implementations have concurrency-safe clients).
I'm pretty new to Go and need answers to some dilemmas I ran into while implementing a small HTTP notifier that runs concurrently, sending messages to a configured HTTP endpoint. For that purpose, I use the following structure:
type Notifier struct {
	url          string
	waitingQueue *list.List
	progress     *list.List
	errs         chan NotificationError
	active       sync.WaitGroup
	mu           sync.Mutex
}
Here is the Notify method:
func (hn *Notifier) Notify(message string) {
	if hn.ProcessingCount() < hn.config.MaxActiveRequestsCount() && hn.checkIfOlderThanOldestWaiting(time.Now().Unix()) {
		element := hn.addToProgressQueue(message)
		hn.active.Add(1)
		go hn.sendNotification(element)
	} else {
		hn.waitingQueue.PushBack(QueueItem{message, time.Now().UnixNano()})
	}
}
And addToProgressQueue:
func (hn *Notifier) addToProgressQueue(message string) *list.Element {
	hn.mu.Lock()
	defer hn.mu.Unlock()
	queueItem := QueueItem{message, time.Now().UnixNano()}
	element := hn.progress.PushBack(queueItem)
	return element
}
I guess this won't work as expected for concurrent reads/writes of the queue? Is it enough to use RWMutex instead of Mutex to ensure the locks work properly?
The code of ProcessingCount
func (hn *Notifier) ProcessingCount() int {
	hn.mu.Lock()
	defer hn.mu.Unlock()
	return hn.progress.Len()
}
Can there be a data race here?
Also, if you have some good resources on data race examples in Golang, it would be well appreciated.
Thanks in advance
I want to trace the complete execution of an HTTP request in golang. For any non-trivial operation the request will eventually call many different functions. I would like to have logs from the entire execution stack tagged with a unique request id, e.g.
http.HandleFunc("/my-request", myrequestHandler)
func myrequestHandler(w http.ResponseWriter, r *http.Request) {
	//debug print1
	log.Printf("....")
	myfunc()
}

func myfunc() {
	//debug print2
	log.Printf("....")
}
Here I need a way to identify print1 and print2 as part of the same request. It looks like zerolog does have something like this, as described here. Like so:
....
c = c.Append(hlog.RequestIDHandler("req_id", "Request-Id"))
h := c.Then(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	hlog.FromRequest(r).Info().
		Msg("Something happened")
}))
http.Handle("/", h)
But if I understand it correctly, it will involve passing the request object to each and every function. Is this the idiomatic way to solve this problem? What are the alternatives?
Set a unique id on a context.Context as soon as the request is received, and pass that context down the call stack. This is what contexts are for.
[Context] carries deadlines, cancellation signals, and other request-scoped values across API boundaries and between processes.
Example:
// You could place this in a separate helper package to improve encapsulation
type ctxKey struct{}

func myRequestHandler(w http.ResponseWriter, r *http.Request) {
	uniqueID := // generate a unique identifier
	// create a context with the http.Request context as parent
	ctx := context.WithValue(r.Context(), ctxKey{}, uniqueID)
	foo(ctx, ...)
	bar(ctx, ...)
}

func foo(ctx context.Context, ...) {
	uniqueID := ctx.Value(ctxKey{})
	// do something with the unique id
	baz(ctx, ...)
}
In particular:
Create the context with *http.Request.Context() as parent. This way, if the request is canceled, e.g. due to client disconnection, the cancellation will propagate to your sub-context.
Consider setting the context value using an unexported struct as key. Unexported structs defined in your package will never conflict with other keys. If you use strings as keys instead, any package could in theory use the same key and overwrite your value (or you could overwrite others' values). YMMV.
Pass your context as the first argument of any function in your call stack, as the package documentation recommends.
For tracing and logging across applications, you might want to look into opentracing. Propagation of tracers is still done with Contexts as outlined above.
You can use context.Context and set the request id on it via a middleware function.
example:
type requestIDKey struct{}

func requestIDSetter(next http.HandlerFunc) http.HandlerFunc {
	return func(rw http.ResponseWriter, r *http.Request) {
		// use provided request id from incoming request if any
		reqID := r.Header.Get("X-Request-ID")
		if reqID == "" {
			// or use some generated string
			reqID = uuid.New().String()
		}
		ctx := context.WithValue(r.Context(), requestIDKey{}, reqID)
		next(rw, r.WithContext(ctx))
	}
}
Then you need to modify your logger to accept a context.Context.
example:
func printfWithContext(ctx context.Context, format string, v ...interface{}) {
	reqID, _ := ctx.Value(requestIDKey{}).(string) // comma-ok assertion avoids a panic when no request ID is set
	log.Printf(reqID+": "+format, v...)
}
and finally apply it to your code:
http.HandleFunc("/my-request", requestIDSetter(myrequestHandler))
func myrequestHandler(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	//debug print1
	printfWithContext(ctx, "....1")
	myfunc(ctx)
}

func myfunc(ctx context.Context) {
	//debug print2
	printfWithContext(ctx, "....2")
}
I am using Cucumber GoDog as a BDD test framework for gRPC microservice testing. GoDog does not come with any assertion helpers or utilities.
Does anyone here have experience adopting any of the existing assertion libraries like Testify/GoMega with GoDog?
As far as I know, GoDog does not run on top of go test, which I guess is why it's challenging to adopt any go test-based assertion libraries like the ones I mentioned. But I would still like to check whether anyone here has experience doing so.
Here's a basic proof-of-concept using Testify:
package bdd

import (
	"fmt"

	"github.com/cucumber/godog"
	"github.com/stretchr/testify/assert"
)

type scenario struct{}

func (_ *scenario) assert(a assertion, expected, actual interface{}, msgAndArgs ...interface{}) error {
	var t asserter
	a(&t, expected, actual, msgAndArgs...)
	return t.err
}

func (sc *scenario) forcedFailure() error {
	return sc.assert(assert.Equal, 1, 2)
}

type assertion func(t assert.TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool

type asserter struct {
	err error
}

func (a *asserter) Errorf(format string, args ...interface{}) {
	a.err = fmt.Errorf(format, args...)
}

func FeatureContext(s *godog.Suite) {
	var sc scenario
	s.Step("^forced failure$", sc.forcedFailure)
}
Feature: forced failure

  Scenario: fail
    Then forced failure
The key here is implementing Testify's assert.TestingT interface.
Here's a proof of concept with GoMega:
Register the GoMega Fail Handler before running any tests to have GoMega simply panic with the error message.
gomega.RegisterFailHandler(func(message string, _ ...int) {
	panic(message)
})
Define a step fail handler to recover from any fails.
func failHandler(err *error) {
	if r := recover(); r != nil {
		*err = fmt.Errorf("%s", r)
	}
}
Now at the beginning of every step definition defer running the failHandler like so:
func shouldBeBar(foo string) (err error) {
	defer failHandler(&err)
	Expect(foo).Should(Equal("bar"))
	return err
}
Now if/when the first of our GoMega assertions fails, the step function will run the failHandler and return the GoMega failure message (if there is one). Notice we are using named result parameters to return the error; see How to return a value in a Go function that panics?
Sorry to see you're still working on this.
As we chatted before, there is a way to get it working via the link I sent you earlier; it's just not necessarily a beginner-friendly setup, as you mentioned in Slack. Perhaps this is something we contributors can look into in the future, but it's not something that's set up currently, and since we're mostly volunteers, setting timelines for new features can be tough.
My recommendation for the time being would be to do assertions via if statements. If you don't want them in your test code specifically, then you can make a quick wrapper function and call them that way.
Had the same question today, trying to integrate gomega with godog. And thanks to Go's simplicity I managed to get something to work (this is my third day with Go :-). Although I think this isn't going to work in real-world projects yet, I'd like to share my thoughts on this.
Coming from Rails/RSpec, I like having compact test cases/steps without boilerplate code. So I tried to move the handling of failures out of the steps and into before/after hooks:
func InitializeGomegaForGodog(ctx *godog.ScenarioContext) {
	var testResult error
	ctx.StepContext().Before(func(ctx context.Context, st *godog.Step) (context.Context, error) {
		testResult = nil
		return ctx, nil
	})
	ctx.StepContext().After(func(ctx context.Context, st *godog.Step, status godog.StepResultStatus, err error) (context.Context, error) {
		return ctx, testResult
	})
	gomega.RegisterFailHandler(func(message string, callerSkip ...int) {
		// remember only the expectation that failed first;
		// anything thereafter is not to be believed
		if testResult == nil {
			testResult = fmt.Errorf("%s", message)
		}
	})
}
func InitializeScenario(ctx *godog.ScenarioContext) {
	InitializeGomegaForGodog(ctx)
	ctx.Step(`^incrementing (\d+)$`, incrementing)
	ctx.Step(`^result is (\d+)$`, resultIs)
}
Of course, this approach will not stop steps where expectations didn't match, so there's a risk of undefined behavior in the rest of the step. But the steps' implementations are quite simple with this approach:
func resultIs(arg1 int) {
	gomega.Expect(1000).To(gomega.Equal(arg1))
}
I'm learning go, and I would like to explore some patterns.
I would like to build a Registry component which maintains a map of some stuff, and I want to provide serialized access to it.
Currently I ended up with something like this:
type JobRegistry struct {
	submission chan JobRegistrySubmitRequest
	listing    chan JobRegistryListRequest
}

type JobRegistrySubmitRequest struct {
	request  JobSubmissionRequest
	response chan Job
}

type JobRegistryListRequest struct {
	response chan []Job
}
func NewJobRegistry() (this *JobRegistry) {
	this = &JobRegistry{make(chan JobRegistrySubmitRequest, 10), make(chan JobRegistryListRequest, 10)}
	go func() {
		jobMap := make(map[string]Job)
		for {
			select {
			case sub := <-this.submission:
				job := MakeJob(sub.request) // ....
				jobMap[job.Id] = job
				sub.response <- job // response is a chan Job, so send the job itself
			case list := <-this.listing:
				res := make([]Job, 0, 100)
				for _, v := range jobMap {
					res = append(res, v)
				}
				list.response <- res
			}
			/// case somechannel....
		}
	}()
	return
}
Basically, I encapsulate each operation inside a struct, which carries the parameters and a response channel.
Then I created helper methods for end users:
func (this *JobRegistry) List() ([]Job, error) {
	res := make(chan []Job, 1)
	req := JobRegistryListRequest{res}
	this.listing <- req
	return <-res, nil // todo: handle errors like timeouts
}
I decided to use a channel for each type of request in order to be type safe.
The problem I see with this approach are:
A lot of boilerplate code and a lot of places to modify when some param/return type changes
Have to do weird things like create yet another wrapper struct in order to return errors from within the handler goroutine. (If I understood correctly, there are no tuples and no way to send multiple values on a channel, like multi-valued returns.)
So, I'm wondering whether all this makes sense, or rather just get back to good old locks.
I'm sure that somebody will find some clever way out using channels.
I'm not entirely sure I understand you, but I'll try answering nevertheless.
You want a generic service that executes jobs sent to it. You also might want the jobs to be serializable.
What we need is an interface that would define a generic job.
type Job interface {
	Run()
	Serialize(io.Writer)
}

func ReadJob(r io.Reader) {...}

type JobManager struct {
	jobs   map[int]Job
	jobs_c chan Job
}
func NewJobManager() (mgr *JobManager) {
	mgr = &JobManager{make(map[int]Job), make(chan Job, JOB_QUEUE_SIZE)}
	go func() {
		for j := range mgr.jobs_c { // the loop ends when the channel is closed
			go j.Run()
		}
	}()
	return
}
type IntJob struct{...}
func (job *IntJob) GetOutChan() chan int {...}
func (job *IntJob) Run() {...}
func (job *IntJob) Serialize(o io.Writer) {...}
Much less code, and roughly as useful.
About signaling errors with an auxiliary stream, you can always use a helper function.
type IntChanWithErr struct {
	c    chan int
	errc chan error
}

func (ch *IntChanWithErr) Next() (v int, err error) {
	select {
	case v = <-ch.c: // not handling a closed channel
	case err = <-ch.errc:
	}
	return
}