I'm building a simple client-server app that I want to trace from the client's execution through a server microservice that calls a second server microservice.
Simply put, it's no more complicated than CLI -> ServiceA -> ServiceB.
The challenge I'm having is how to serialize the context. Most of the docs I've looked at do some form of automated HTTP header injection (e.g. https://opentelemetry.lightstep.com/core-concepts/context-propagation/), but I do not have access to that. I think I need to serialize the context of the trace/span in the client, push it to the server, and rehydrate it there. (Mind you, I'd love this to be simpler, but I cannot figure out how.)
So the object looks like this (called "job"):
// serializedOtelContext carries the injected trace context to the server
args := &types.SubmitArgs{
SerializedOtelContext: serializedOtelContext,
}
job := &types.Job{}
tracer := otel.GetTracerProvider().Tracer("myservice.org")
_, span := tracer.Start(ctx, "Submitting Job to RPC")
defer span.End()
err := system.JsonRpcMethod(rpcHost, rpcPort, "Submit", args, job)
The function that submits over JSON-RPC is here:
func JsonRpcMethod(
host string,
port int,
method string,
req, res interface{},
) error {
client, err := rpc.DialHTTP("tcp", fmt.Sprintf("%s:%d", host, port))
if err != nil {
return fmt.Errorf("Error in dialing. %s", err)
}
return client.Call(fmt.Sprintf("JobServer.%s", method), req, res)
}
And the function that receives it is here:
func (server *JobServer) Submit(args *types.SubmitArgs, reply *types.Job) error {
//nolint
job, err := server.RequesterNode.Scheduler.SubmitJob(args.Spec, args.Deal)
if err != nil {
return err
}
*reply = *job
return nil
}
My question is: how do I, in the receiving function ("Submit" above), extract the trace/span from the sender?
Here is a small program to illustrate the usage. Hope this makes it clear.
package main
import (
"context"
"fmt"
"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
"go.opentelemetry.io/otel/propagation"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
)
func main() {
// common init
// You may also want to set them as globals
exp, _ := stdouttrace.New(stdouttrace.WithPrettyPrint()) // error ignored for brevity
bsp := sdktrace.NewSimpleSpanProcessor(exp) // You should use batch span processor in prod
tp := sdktrace.NewTracerProvider(
sdktrace.WithSampler(sdktrace.AlwaysSample()),
sdktrace.WithSpanProcessor(bsp),
)
propagator := propagation.NewCompositeTextMapPropagator(propagation.TraceContext{}, propagation.Baggage{})
ctx, span := tp.Tracer("foo").Start(context.Background(), "parent-span-name")
defer span.End()
// Serialize the context into a carrier
carrier := propagation.MapCarrier{}
propagator.Inject(ctx, carrier)
// This carrier is sent across the process boundary
fmt.Println(carrier)
// Extract the context and start new span as child
// In your receiving function
parentCtx := propagator.Extract(context.Background(), carrier)
_, childSpan := tp.Tracer("foo").Start(parentCtx, "child-span-name")
childSpan.AddEvent("some-dummy-event")
childSpan.End()
}
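To wire this into the RPC above, one option is to inject the context into a propagation.MapCarrier on the client, marshal it into SubmitArgs, and extract it inside Submit. The following is only a sketch: it assumes the composite propagator has been registered globally via otel.SetTextMapPropagator, that SerializedOtelContext is a string field, and that encoding/json, go.opentelemetry.io/otel, and go.opentelemetry.io/otel/propagation are imported.
// Client side (sketch): serialize the active trace context into the RPC args.
func serializeOtelContext(ctx context.Context) (string, error) {
	carrier := propagation.MapCarrier{}
	otel.GetTextMapPropagator().Inject(ctx, carrier)
	payload, err := json.Marshal(carrier)
	return string(payload), err
}

// Server side (sketch): rehydrate the carrier and start a child span from it.
func (server *JobServer) Submit(args *types.SubmitArgs, reply *types.Job) error {
	carrier := propagation.MapCarrier{}
	if err := json.Unmarshal([]byte(args.SerializedOtelContext), &carrier); err != nil {
		return err
	}
	ctx := otel.GetTextMapPropagator().Extract(context.Background(), carrier)
	ctx, span := otel.Tracer("myservice.org").Start(ctx, "JobServer.Submit")
	defer span.End()
	_ = ctx // pass ctx on to SubmitJob if its signature accepts a context

	job, err := server.RequesterNode.Scheduler.SubmitJob(args.Spec, args.Deal)
	if err != nil {
		return err
	}
	*reply = *job
	return nil
}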
Related
This is kind of an extension of my previous question, Reuse log client in interceptor for Golang grpc server method.
Basically I have a grpc server (written in Go) that exposes three APIs:
SubmitJob
CancelJob
GetJobStatus
I am using Datadog to log metrics, so in each API, I have code like:
func (s *myServer) submitJob(ctx context.Context, request *submitJobRequest) (*submitJobResponse, error) {
s.dd_client.LogRequestCount("SubmitJob")
start_time := time.Now()
// wrap in a closure so the duration is measured when the handler returns, not when defer is declared
defer func() { s.dd_client.LogRequestDuration("SubmitJob", time.Since(start_time)) }()
sth, err:= someFunc1()
if err != nil {
s.dd_client.LogErrorCount("SubmitJob")
return nil, err
}
resp, err:= someFunc2(sth)
if err != nil {
s.dd_client.LogErrorCount("SubmitJob")
return nil, err
}
return resp, nil
}
This approach works but has several problems:
The LogRequestCount and LogRequestDuration calls are duplicated across all APIs
I am calling LogErrorCount in every place an error is returned, which seems ugly
I learned that an interceptor might help with logging, so I wrote one like this:
func (s *myServer) UnaryInterceptor(ctx context.Context,
request interface{},
info *grpc.UnaryServerInfo,
handler grpc.UnaryHandler,
) (interface{}, error) {
// Get method name e.g. SubmitJob, CancelJob, GetJobStatus
tmp := strings.Split(info.FullMethod, "/")
method := tmp[len(tmp)-1]
s.dd_client.LogRequestCount(method)
start_time := time.Now()
resp, err := handler(ctx, request)
s.dd_client.LogRequestDuration(method, time.Since(start_time))
if err != nil {
s.dd_client.LogErrorCount(method)
}
return resp, err
}
And set it in the main() function:
server := grpc.NewServer(grpc.UnaryInterceptor(my_server.UnaryInterceptor))
This works for me, but I noticed two problems:
Here the interceptor takes myServer as the receiver; is this good practice? I am doing this because I want to reuse the Datadog client (dd_client) created within myServer. Other options would be to make the Datadog client a singleton used by both the interceptor and myServer, or to create an interceptor struct with its own Datadog client.
The interceptor can only handle logging for generic metrics, e.g. request count and duration, but there could be metrics specific to each API, which means I still need logging-related code in each API implementation. So should I still use an interceptor? Because now the logging-related code is split between two places (the API implementations and the interceptor).
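For what it's worth, one way to avoid hanging the interceptor off myServer is a dedicated interceptor struct that owns the Datadog client. This is only a sketch of that option; the Log* method names are taken from the question, and DatadogClient stands in for whatever concrete type dd_client actually has.
// metricsInterceptor owns the Datadog client, so neither myServer nor the
// interceptor needs to reach into the other's fields.
type metricsInterceptor struct {
	dd *DatadogClient // hypothetical concrete type of dd_client
}

func (m *metricsInterceptor) Unary(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	parts := strings.Split(info.FullMethod, "/")
	method := parts[len(parts)-1]

	m.dd.LogRequestCount(method)
	start := time.Now()
	resp, err := handler(ctx, req)
	m.dd.LogRequestDuration(method, time.Since(start))
	if err != nil {
		m.dd.LogErrorCount(method)
	}
	return resp, err
}

// In main():
//   mi := &metricsInterceptor{dd: ddClient}
//   server := grpc.NewServer(grpc.UnaryInterceptor(mi.Unary))
API-specific metrics would still live in the individual handlers; the interceptor only removes the generic, repeated bookkeeping.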
Update 1: it seems that using a context tied to the HTTP request can lead to 'context canceled' errors, whereas using context.Background() as the parent works fine.
// This works, no 'context canceled' errors
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Second)
// However, this creates 'context canceled' errors under mild load
// ctx, cancel := context.WithTimeout(r.Context(), 100*time.Second)
defer cancel()
app.Insert(ctx, record)
(updated code sample below to produce a self-contained example for repro)
In Go, I have an HTTP handler like the following code. On the first HTTP request to this endpoint I get a context canceled error; however, the data is actually inserted into the database. On subsequent requests to this endpoint, no such error occurs and the data is also successfully inserted into the database.
Question: Am I setting up and passing the context correctly between the HTTP handler and the pgx QueryRow method? (If not, is there a better way?)
If you copy this code into main.go, run go run main.go, go to localhost:4444/create, and hold Ctrl-R to produce a mild load, you should see some context canceled errors.
package main
import (
"context"
"fmt"
"log"
"math/rand"
"net/http"
"time"
"github.com/jackc/pgx/v4/pgxpool"
)
type application struct {
DB *pgxpool.Pool
}
type Task struct {
ID string
Name string
Status string
}
//HTTP GET /create
func (app *application) create(w http.ResponseWriter, r *http.Request) {
fmt.Println(r.URL.Path, time.Now())
task := &Task{Name: fmt.Sprintf("Task #%d", rand.Int()%1000), Status: "pending"}
// -------- problem code here ----
// This line works and does not generate any 'context canceled' errors
//ctx, cancel := context.WithTimeout(context.Background(), 100*time.Second)
// However, this line generates 'context canceled' errors under mild load
ctx, cancel := context.WithTimeout(r.Context(), 100*time.Second)
// -------- end -------
defer cancel()
err := app.insertTask(ctx, task)
if err != nil {
fmt.Println("insert error:", err)
return
}
fmt.Fprintf(w, "%+v", task)
}
func (app *application) insertTask(ctx context.Context, t *Task) error {
stmt := `INSERT INTO task (name, status) VALUES ($1, $2) RETURNING ID`
row := app.DB.QueryRow(ctx, stmt, t.Name, t.Status)
err := row.Scan(&t.ID)
if err != nil {
return err
}
return nil
}
func main() {
rand.Seed(time.Now().UnixNano())
db, err := pgxpool.Connect(context.Background(), "postgres://test:test123@localhost:5432/test")
if err != nil {
log.Fatal(err)
}
log.Println("db conn pool created")
stmt := `CREATE TABLE IF NOT EXISTS public.task (
id uuid NOT NULL DEFAULT gen_random_uuid(),
name text NULL,
status text NULL,
PRIMARY KEY (id)
); `
_, err = db.Exec(context.Background(), stmt)
if err != nil {
log.Fatal(err)
}
log.Println("task table created")
defer db.Close()
app := &application{
DB: db,
}
mux := http.NewServeMux()
mux.HandleFunc("/create", app.create)
log.Println("http server up at localhost:4444")
err = http.ListenAndServe(":4444", mux)
if err != nil {
log.Fatal(err)
}
}
TLDR: Using r.Context() works fine in production; testing with a browser is the problem.
An HTTP request gets its own context that is canceled when the request finishes. That is a feature, not a bug. Developers are expected to use it and gracefully shut down execution when the request is interrupted by the client or by a timeout. For example, a canceled request can mean the client never sees the response (the transaction result), and the developer can decide to roll back that transaction.
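As an illustration, a minimal sketch of that graceful handling inside the create handler above (assuming we simply stop and log when the request context is gone):
err := app.insertTask(ctx, task)
if err != nil {
	if ctx.Err() != nil {
		// The request context was canceled or timed out; the client will never
		// see the response, so just log and stop (or roll back the work).
		log.Println("request aborted:", ctx.Err())
		return
	}
	fmt.Println("insert error:", err)
	return
}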
In production, request cancellation does not happen very often for normally designed and built APIs. Typically, the flow is controlled by the server, and the server returns the result before the request is canceled.
Multiple client requests do not affect each other because each gets its own goroutine and context. Again, we are talking about the happy path for normally designed and built applications. Your sample app looks good and should work fine.
The problem is how we test the app. Instead of creating multiple independent requests, we use a browser and refresh a single browser session. I did not check exactly what is going on, but I assume the browser terminates the existing request in order to run a new one when you press Ctrl-R. The server sees that request termination and communicates it to your code as context cancellation.
Try to test your code using curl or some other script/utility that creates independent requests. I am sure you will not see cancellations in that case.
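For example, a tiny Go load generator (an illustrative sketch, not part of the original answer) that fires independent requests concurrently:
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Each request gets its own connection and context, so finishing
			// one does not cancel another, unlike a browser refresh.
			resp, err := http.Get("http://localhost:4444/create")
			if err != nil {
				fmt.Println("request error:", err)
				return
			}
			defer resp.Body.Close()
			io.Copy(io.Discard, resp.Body)
			fmt.Println("status:", resp.Status)
		}()
	}
	wg.Wait()
}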
I am new to programming and have no idea how to use the token-generation client API function below from my client-side Go program. Looking for some advice. Thank you so much.
Source code package: https://pkg.go.dev/github.com/gravitational/teleport/api/client#Client.UpsertToken
Function Source Code:
func (c *Client) UpsertToken(ctx context.Context, token types.ProvisionToken) error {
tokenV2, ok := token.(*types.ProvisionTokenV2)
if !ok {
return trace.BadParameter("invalid type %T", token)
}
_, err := c.grpc.UpsertToken(ctx, tokenV2, c.callOpts...)
return trail.FromGRPC(err)
}
My code:
package main
import (
"context"
"crypto/tls"
"fmt"
"log"
"os"
"strings"
"time"
"github.com/gravitational/teleport/api/client"
"github.com/gravitational/teleport/api/client/proto"
"google.golang.org/grpc"
)
// Client is a gRPC Client that connects to a Teleport Auth server either
// locally or over ssh through a Teleport web proxy or tunnel proxy.
//
// This client can be used to cover a variety of Teleport use cases,
// such as programmatically handling access requests, integrating
// with external tools, or dynamically configuring Teleport.
type Client struct {
// c contains configuration values for the client.
//c Config
// tlsConfig is the *tls.Config for a successfully connected client.
tlsConfig *tls.Config
// dialer is the ContextDialer for a successfully connected client.
//dialer ContextDialer
// conn is a grpc connection to the auth server.
conn *grpc.ClientConn
// grpc is the gRPC client specification for the auth server.
grpc proto.AuthServiceClient
// closedFlag is set to indicate that the connection is closed.
// It's a pointer to allow the Client struct to be copied.
closedFlag *int32
// callOpts configure calls made by this client.
callOpts []grpc.CallOption
}
/*
type ProvisionToken interface {
Resource
// SetMetadata sets resource metatada
SetMetadata(meta Metadata)
// GetRoles returns a list of teleport roles
// that will be granted to the user of the token
// in the crendentials
GetRoles() SystemRoles
// SetRoles sets teleport roles
SetRoles(SystemRoles)
// GetAllowRules returns the list of allow rules
GetAllowRules() []*TokenRule
// GetAWSIIDTTL returns the TTL of EC2 IIDs
GetAWSIIDTTL() Duration
// V1 returns V1 version of the resource
V2() *ProvisionTokenSpecV2
// String returns user friendly representation of the resource
String() string
}
type ProvisionTokenSpecV2 struct {
// Roles is a list of roles associated with the token,
// that will be converted to metadata in the SSH and X509
// certificates issued to the user of the token
Roles []SystemRole `protobuf:"bytes,1,rep,name=Roles,proto3,casttype=SystemRole" json:"roles"`
Allow []*TokenRule `protobuf:"bytes,2,rep,name=allow,proto3" json:"allow,omitempty"`
AWSIIDTTL Duration `protobuf:"varint,3,opt,name=AWSIIDTTL,proto3,casttype=Duration" json:"aws_iid_ttl,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
*/
func main() {
ctx := context.Background()
args := os.Args[1:]
nodeType := ""
if len(args) > 0 {
nodeType = args[0]
}
proxyAddress := os.Getenv("TELEPORT_PROXY")
if len(proxyAddress) <= 0 {
proxyAddress = "proxy.teleport.example.local:443"
}
clt, err := client.New(ctx, client.Config{
Addrs: []string{
"proxy.teleport.example.local:443",
"proxy.teleport.example.local:3025",
"proxy.teleport.example.local:3024",
"proxy.teleport.example.local:3080",
},
Credentials: []client.Credentials{
client.LoadProfile("", ""),
},
})
if err != nil {
log.Fatalf("failed to create client: %v", err)
}
defer clt.Close()
ctx, err, token, err2 := clt.UpsertToken(ctx, token)
if err || err2 != nil {
log.Fatalf("failed to get tokens: %v", err)
}
now := time.Now()
t := 0
fmt.Printf("{\"tokens\": [")
for a, b := range token {
if strings.Contains(b.GetRoles(), b.Allow().String(), b.GetAWSIIDTTL(), nodeType) {
if t >= 1 {
fmt.Printf(",")
} else {
panic(err)
}
expiry := "never" //time.Now().Add(time.Hour * 8).Unix()
_ = expiry
if b.Expiry().Unix() > 0 {
exptime := b.Expiry().Format(time.RFC822)
expdur := b.Expiry().Sub(now).Round(time.Second)
expiry = fmt.Sprintf("%s (%s)", exptime, expdur.String())
}
fmt.Printf("\"count\": \"%1d\",", a)
fmt.Printf(b.Roles(), b.GetAllowRules(), b.GetAWSIIDTTL(), b.GetMetadata().Labels)
}
}
}
Output:
Syntax error instead of creating a token
It seems your code has many mistakes, and it's obvious you are getting syntax errors. I am sure the console shows the line numbers where these syntax errors occur.
Please review Go's syntax, how to call functions, and how many parameters each function takes.
There are a few mistakes I would like to point out after reviewing your code:
// It shouldn't be like this:
ctx, err, token, err2 := clt.UpsertToken(ctx, token)
// Instead it should be like this:
err := clt.UpsertToken(ctx, token)
// The return type of UpsertToken() is error, so use a single variable to receive it.
The strings.Contains() function takes two arguments, but you are passing four.
Refer to this document for strings.Contains().
You assign t := 0 and check it in an if condition inside the for loop, but you never increment it.
Refer to this document for fmt.Printf().
Refer to this document for functions.
Remove all the syntax errors; only then will your code run. Also cross-check your logic.
If you want to see an example of the syntax errors, check here: https://go.dev/play/p/Hhu48UqlPRF
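For reference, a minimal corrected call might look like the sketch below. The ProvisionTokenV2 construction is an assumption based on the ProvisionTokenSpecV2 shown above; check the api/types package for the exact fields (or a NewProvisionToken helper) before relying on it.
// Sketch only: UpsertToken returns a single error, and the token passed in
// must satisfy types.ProvisionToken (e.g. a *types.ProvisionTokenV2).
token := &types.ProvisionTokenV2{
	Metadata: types.Metadata{
		Name: "example-join-token", // hypothetical token name
	},
	Spec: types.ProvisionTokenSpecV2{
		Roles: []types.SystemRole{types.RoleNode}, // assumed role constant
	},
}
if err := clt.UpsertToken(ctx, token); err != nil {
	log.Fatalf("failed to upsert token: %v", err)
}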
I am trying to test some Go code, and I have a method that calls several other methods from its body. All of these methods perform operations using an Elasticsearch client. I want to know whether it would be good practice to test this method with a test server that writes different responses depending on the request method and path it receives, since those requests are made when the methods inside the body execute and call the Elasticsearch client pointed at my test server.
Update:
I am testing an Elasticsearch middleware. It implements a reindex service like this:
type reindexService interface {
reindex(ctx context.Context, index string, mappings, settings map[string]interface{}, includes, excludes, types []string) error
mappingsOf(ctx context.Context, index string) (map[string]interface{}, error)
settingsOf(ctx context.Context, index string) (map[string]interface{}, error)
aliasesOf(ctx context.Context, index string) ([]string, error)
createIndex(ctx context.Context, name string, body map[string]interface{}) error
deleteIndex(ctx context.Context, name string) error
setAlias(ctx context.Context, index string, aliases ...string) error
getIndicesByAlias(ctx context.Context, alias string) ([]string, error)
}
I can easily test all the individual methods using this pattern: create a simple Elasticsearch client using an httptest server URL and make requests to that server.
var createIndexTests = []struct {
setup *ServerSetup
index string
err string
}{
{
&ServerSetup{
Method: "PUT",
Path: "/test",
Body: `null`,
Response: `{"acknowledged": true, "shards_acknowledged": true, "index": "test"}`,
},
"test",
"",
},
// More test cases here
}
func TestCreateIndex(t *testing.T) {
for _, tt := range createIndexTests {
t.Run("Should successfully create index with a valid setup", func(t *testing.T) {
ctx := context.Background()
ts := buildTestServer(t, tt.setup)
defer ts.Close()
es, _ := newTestClient(ts.URL)
err := es.createIndex(ctx, tt.index, nil)
if !compareErrs(tt.err, err) {
t.Fatalf("Index creation should have failed with error: %v got: %v instead\n", tt.err, err)
}
})
}
}
But in the case of the reindex method this approach poses a problem, since reindex calls all the other methods inside its body. reindex looks something like this:
func (es *elasticsearch) reindex(ctx context.Context, indexName string, mappings, settings map[string]interface{}, includes, excludes, types []string) error {
var err error
// Some preflight checks
// If mappings are not passed, we fetch the mappings of the old index.
if mappings == nil {
mappings, err = es.mappingsOf(ctx, indexName)
// handle err
}
// If settings are not passed, we fetch the settings of the old index.
if settings == nil {
settings, err = es.settingsOf(ctx, indexName)
// handle err
}
// Setup the destination index prior to running the _reindex action.
body := make(map[string]interface{})
body["mappings"] = mappings
body["settings"] = settings
newIndexName, err := reindexedName(indexName)
// handle err
err = es.createIndex(ctx, newIndexName, body)
// handle err
// Some additional operations
// Reindex action.
_, err = es.client.Reindex().
Body(reindexBody).
Do(ctx)
// handle err
// Fetch all the aliases of old index
aliases, err := es.aliasesOf(ctx, indexName)
// handle err
aliases = append(aliases, indexName)
// Delete old index
err = es.deleteIndex(ctx, indexName)
// handle err
// Set aliases of old index to the new index.
err = es.setAlias(ctx, newIndexName, aliases...)
// handle err
return nil
}
For testing the reindex method I have tried mocking and DI, but that turns out to be hard since the methods are defined on a struct rather than taking an interface as an argument. (I want to keep the implementation as it is, since changing it would require changes to all the plugin implementations, which I want to avoid.)
I wanted to know whether I can use a modified version of my build-server function (the one I am using is given below) for the reindex service, one that writes the appropriate response based on the HTTP method and request path each inner method uses.
type ServerSetup struct {
Method, Path, Body, Response string
HTTPStatus int
}
// This function is a modified version of: https://github.com/github/vulcanizer/blob/master/es_test.go
func buildTestServer(t *testing.T, setup *ServerSetup) *httptest.Server {
handlerFunc := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
requestBytes, _ := ioutil.ReadAll(r.Body)
requestBody := string(requestBytes)
matched := false
if r.Method == setup.Method && r.URL.EscapedPath() == setup.Path && requestBody == setup.Body {
matched = true
if setup.HTTPStatus == 0 {
w.WriteHeader(http.StatusOK)
} else {
w.WriteHeader(setup.HTTPStatus)
}
_, err := w.Write([]byte(setup.Response))
if err != nil {
t.Fatalf("Unable to write test server response: %v", err)
}
}
if !matched {
t.Fatalf("No requests matched setup. Got method %s, Path %s, body %s\n", r.Method, r.URL.EscapedPath(), requestBody)
}
})
return httptest.NewServer(handlerFunc)
}
Something like this function, but taking a map of request methods and paths mapped to appropriate responses, and writing them to the writer?
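Yes; a minimal sketch along those lines, keeping the same conventions as buildTestServer above, keys each expected request by method and path and looks up a canned response:
type RouteResponse struct {
	HTTPStatus int
	Response   string
}

// buildMultiRouteTestServer maps "METHOD /path" keys to canned responses so a
// single test server can answer every call reindex makes (mappings, settings,
// create index, _reindex, aliases, delete).
func buildMultiRouteTestServer(t *testing.T, routes map[string]RouteResponse) *httptest.Server {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		key := r.Method + " " + r.URL.EscapedPath()
		route, ok := routes[key]
		if !ok {
			t.Fatalf("no route matched request: %s", key)
			return
		}
		if route.HTTPStatus == 0 {
			w.WriteHeader(http.StatusOK)
		} else {
			w.WriteHeader(route.HTTPStatus)
		}
		if _, err := w.Write([]byte(route.Response)); err != nil {
			t.Fatalf("unable to write test server response: %v", err)
		}
	})
	return httptest.NewServer(handler)
}
The request-body check from the original is dropped here for brevity; add it back per route if the reindex assertions need it.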
I'm creating a simple UDP client that listens on multiple ports and saves the requests to Bigtable.
Before you ask: it's essential to listen on different ports.
Everything was working nicely until I included Bigtable. After doing so, the listeners block completely.
My stripped-down code, without Bigtable, looks like this:
func flow(port string) {
protocol := "udp"
udpAddr, err := net.ResolveUDPAddr(protocol, "0.0.0.0:"+port)
if err != nil {
fmt.Println("Wrong Address")
return
}
udpConn, err := net.ListenUDP(protocol, udpAddr)
if err != nil {
fmt.Println(err)
return
}
defer udpConn.Close()
for {
Publish(udpConn, port)
}
}
func main() {
fmt.Print("Starting server.........")
for i := *Start; i <= *End; i++ {
x := strconv.Itoa(i)
go flow(x)
}
}
This works fine; however, as soon as I add the following Bigtable code, the whole thing blocks. If I remove the goroutine that creates the listener (which means I can't listen on multiple ports), it works.
func createBigTable() {
ctx := context.Background()
client, err := bigtable.NewClient(ctx, *ProjectID, *Instance)
if err != nil {
log.Fatal("Bigtable NewClient:", err)
}
Table = client.Open("x")
}
I managed to get it working by adding a query in the createBigTable func, but the program still blocks later on.
I have no idea if this is an issue with Bigtable, gRPC, or just the way I'm doing it.
Would really appreciate some advice on how to fix it.
--- UPDATE ---
I've discovered the issue isn't just with Bigtable - I also have the same issue when I call Google Cloud Pub/Sub.
--- UPDATE 2 ---
createBigTable is called in the init function (before the main function):
func init() {
createBigTable()
}
--- Update 3 ---
Output from sigquit can be found here:
https://pastebin.com/fzixqmiA
In your playground example, you're using for {} to keep the server running forever.
This seems to deprive the goroutines of ever getting a chance to run.
Try using e.g. a sync.WaitGroup to yield control from the main() goroutine and let the flow() goroutines handle the incoming UDP packets.
import (
...
"sync"
...
)
...
func main() {
fmt.Print("Starting server.")
for i := *Start; i <= *End; i++ {
x := strconv.Itoa(i)
go flow(x)
}
var wg sync.WaitGroup
wg.Add(1)
wg.Wait() // Done is never called, so this blocks main forever while the flow goroutines run
}
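If the WaitGroup is only there to park main, an empty select does the same job; a sketch of the same main with that variation:
func main() {
	fmt.Print("Starting server.")
	for i := *Start; i <= *End; i++ {
		go flow(strconv.Itoa(i))
	}
	// Block main forever without busy-waiting; the flow goroutines keep
	// handling UDP packets on their own ports.
	select {}
}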