Why use the grpclog package - Go

I'm looking at the gRPC example for Go: https://grpc.io/docs/tutorials/basic/go.html
I'm wondering: what's the purpose of the grpclog package? The example client/server code uses grpclog.Printf and grpclog.Fatalf. Why not just use fmt.Printf and log.Fatalf?

This package integrates with gRPC's verbosity levels. Per the grpclog documentation:
// All logs in transport package only go to verbose level 2.
// All logs in other packages in grpc are logged in spite of the verbosity level.
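Beyond that, grpclog is the logging interface that gRPC itself uses internally, so logging through it keeps your output consistent with gRPC's own logs, and it lets you swap out the underlying logger, which fmt.Printf and log.Fatalf cannot do for gRPC internals. A minimal sketch, assuming google.golang.org/grpc/grpclog:

package main

import (
    "os"

    "google.golang.org/grpc/grpclog"
)

func main() {
    // Route gRPC-internal info logs to stdout and warnings/errors to stderr;
    // gRPC's own transport logs flow through this logger too.
    grpclog.SetLoggerV2(grpclog.NewLoggerV2(os.Stdout, os.Stderr, os.Stderr))

    // Application code can log through the same logger.
    grpclog.Infof("client starting")
}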

Related

Connect to a WebSocket with a proxy in Deno

I want to connect to a WebSocket through a proxy, all of that using Deno. Sadly, I cannot find any Deno module that supports this. I even looked at the Node.js websocket module to see how it's handled there. But I don't really understand the standard Deno WebSocket module, so I can't figure out how to implement proxies. Can someone help me?
Edit:
I found this:
import { createRequire } from "https://deno.land/std/node/module.ts";
const require = createRequire(import.meta.url);
This lets me import Node modules, so I can hopefully just use the Node websocket module in Deno. I will update my question if I can get it to work.
I'm not sure if it would work for WebSockets (it's not explicitly mentioned), but Deno supports proxies via environment variables; see https://deno.land/manual#v1.25.1/linking_to_external_code/proxies
If you try it out I would be interested in knowing if it worked or not.
Something else you can do (instead of your createRequire import), since Deno version 1.25, is directly importing npm packages like this:
import * as <whatever> from 'npm:<packagename>'
although this is currently flagged as unstable and therefore requires you to run your code with --unstable.

Golang gofiber framework demo in k8s with DataDog APM integration: how to add tracer and profiler?

I have a small proof-of-concept project to add DataDog APM/tracing capabilities to a gofiber (https://github.com/gofiber) web app. The app is up and running in an EKS environment which already has strong DataDog integration (agent, APM enabled for the entire cluster, etc.).
I am still learning the ropes with gofiber. My question is: what is the simplest and most efficient way to add the tracer and profiler to my project?
DataDog is recommending these two packages:
go get gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer
go get gopkg.in/DataDog/dd-trace-go.v1/profiler
Currently I have a simple main.go file serving "Hello World" at /, using one of the gofiber recipes.
Can I add the tracer and profiler as separate functions in the same file, or should I have separate files for them in my project?
Definitely trying to avoid running an entirely separate container in my pod for this tracing capability. Thanks for any advice or suggestions.
You need to add the DataDog tracer in main.go and as Fiber middleware in order to trace Fiber framework requests.
Refer to the examples below to enable DataDog tracing for Fiber apps.
main.go example
package main

import (
    "gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

func main() {
    // Start the tracer with the agent address, service name, and environment.
    tracer.Start(
        tracer.WithAgentAddr("localhost:8200"),
        tracer.WithService("APP NAME"),
        tracer.WithEnv("TRACE ENV"),
    )
    defer tracer.Stop()
}
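The question also asks about the profiler; it can be started in the same main.go, right next to the tracer. A minimal sketch using the gopkg.in/DataDog/dd-trace-go.v1/profiler package DataDog recommends above (the service and env names are placeholders):

package main

import (
    "log"

    "gopkg.in/DataDog/dd-trace-go.v1/profiler"
)

func main() {
    // Start the profiler; unlike tracer.Start, profiler.Start returns an error.
    err := profiler.Start(
        profiler.WithService("APP NAME"),
        profiler.WithEnv("TRACE ENV"),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer profiler.Stop()
}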
Fiber middleware example
package main

import (
    "log"

    "github.com/gofiber/fiber/v2"
    fibertrace "gopkg.in/DataDog/dd-trace-go.v1/contrib/gofiber/fiber.v2"
)

func main() {
    app := fiber.New()
    // Trace incoming requests through the DataDog Fiber middleware.
    app.Use(fibertrace.Middleware(fibertrace.WithServiceName("APP Name value")))
    log.Fatal(app.Listen(":3000"))
}

How to load kubeconfig and use client-go in a Go program

What is the role of kubeconfig, and how do we use it in a Go program for simple operations?
There is an example in the client-go library you can use for learning. You can check the source; I am not including it here, as it is a bit heavy.
But in general:
You build a client config from your kubeconfig (for example with clientcmd.BuildConfigFromFlags) and pass it to kubernetes.NewForConfig to get a clientset.
Then, using the clientset returned by the previous call, your app performs API requests. A sketch follows below.
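For illustration, here is a minimal out-of-cluster sketch modeled on the client-go examples; the kubeconfig path and the "default" namespace are assumptions:

package main

import (
    "context"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // Assume the default kubeconfig location; adjust as needed.
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")

    // Build a rest.Config from the kubeconfig file.
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err)
    }

    // Create the clientset used for API requests.
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // Simple operation: list pods in the "default" namespace.
    pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("There are %d pods in the default namespace\n", len(pods.Items))
}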

Logs from GCF in Go don't contain log level

I am trying to send info/error logs to Stackdriver Logging on GCP from a Cloud Function written in Go; however, none of the logs have a log level assigned.
I have created a function from https://github.com/GoogleCloudPlatform/golang-samples/blob/master/functions/helloworld/hello_logging.go to demonstrate the problem.
Cloud Support here! What you are trying to do is not possible, as specified in the documentation:
Logs to stdout or stderr do not have an associated log level.
Internal system messages have the DEBUG log level.
What you probably need is to use the Logging API, specifically the Log Levels section; a short sketch follows below.
If this is not working for you either, you could try using Node.js instead of Go, as it currently emits logs at the INFO and ERROR levels using console.log() and console.error() respectively.
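For illustration, a minimal sketch of writing leveled entries with the Cloud Logging API client for Go (cloud.google.com/go/logging); the project ID and log name are placeholders:

package main

import (
    "context"
    "log"

    "cloud.google.com/go/logging"
)

func main() {
    ctx := context.Background()

    // "my-project" is a placeholder project ID.
    client, err := logging.NewClient(ctx, "my-project")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Entries written this way carry an explicit severity (log level).
    logger := client.Logger("my-log")
    logger.Log(logging.Entry{Severity: logging.Info, Payload: "Hello logs"})
    logger.Log(logging.Entry{Severity: logging.Error, Payload: "Something failed"})
}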
I created a package to do exactly this:
github.com/ncruces/go-gcp/glog
It works on App Engine, Kubernetes Engine, Cloud Run, and Cloud Functions, and it supports setting logging levels, request/tracing metadata, and structured logging.
Usage:
import (
    "net/http"

    "github.com/ncruces/go-gcp/glog"
)

func HelloWorld(w http.ResponseWriter, r *http.Request) {
    glog.Infoln("Hello logs")
    glog.Errorln("Hello logs")

    // or, to set request metadata:
    logger := glog.ForRequest(r)
    logger.Infoln("Hello logs")
}
You can use an external library that allows you to set the log level. Here is what I use:
logrus for setting the minimal log level (you can improve this by providing the log level in an env var), and joonix as the fluentd formatter; a reconstructed sketch follows below.
A point of attention: I import the logrus package under the name log:
log "github.com/sirupsen/logrus"
Thus, it is not the log standard library in use, but the logrus library. That's sometimes confusing... you can simply replace the alias log with logrus to avoid any confusion.
Joonix is a fluentd formatter that transforms the logs into a format ingestible by Stackdriver.
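Since the original listing is not reproduced here, the following is a reconstruction under the assumptions above: logrus aliased as log, joonix (github.com/joonix/log) as the formatter, and an illustrative LOG_LEVEL environment variable:

package main

import (
    "os"

    joonix "github.com/joonix/log"
    log "github.com/sirupsen/logrus" // note: logrus imported under the name log
)

func main() {
    // Format logs so Stackdriver (via fluentd) can ingest them with levels.
    log.SetFormatter(joonix.NewFormatter())

    // Illustrative improvement: take the minimal level from an env var.
    level, err := log.ParseLevel(os.Getenv("LOG_LEVEL"))
    if err != nil {
        level = log.InfoLevel
    }
    log.SetLevel(level)

    log.Info("Hello logs")
    log.Error("Hello logs")
}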
I, too, took a stab at creating a package for this. If you use logrus (https://github.com/sirupsen/logrus), then this may be helpful:
https://github.com/tekkamanendless/gcfhook
I'm using this in production, and it's working just fine.
I recommend https://github.com/apsystole/log. It's also log- and logrus-compatible, but it is a small, zero-dependency module, unlike the libraries used in two existing answers, which bring in 400+ modules as their dependencies (gasp... I am simply looking at go mod graph).

protoc command not generating all base classes (java)

I have been trying to generate the basic gRPC client and server interfaces from the .proto service definition here, from the official gRPC repo.
The relevant service defined in that file (from the link above) is below:
service RouteGuide {
    rpc GetFeature(Point) returns (Feature) {}
    rpc ListFeatures(Rectangle) returns (stream Feature) {}
    rpc RecordRoute(stream Point) returns (RouteSummary) {}
    rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
}
The command I run is protoc --java_out=${OUTPUT_DIR} path/to/proto/file
According to the gRPC site (specifically here), the protoc command above is supposed to generate a RouteGuideGrpc.java file containing a base class, RouteGuideGrpc.RouteGuideImplBase, with all the methods defined in the RouteGuide service, but that file does not get generated for me.
Has anyone faced similar issues? Is the official documentation simply incorrect? And would anyone have any suggestion as to what I can do to generate that missing class?
This may help someone else in the future, so I'll answer my own question.
I believe the Java documentation for gRPC code generation is not fully up to date, and the information is scattered among different official repositories.
It turns out that in order to generate all the gRPC Java service base classes as expected, you need to pass an additional flag to the protoc CLI, like so: --grpc-java_out=${OUTPUT_DIR}. But in order for that additional flag to work, you need a few extra things:
The binary for the protoc plugin for gRPC Java, protoc-gen-grpc-java: you can get the relevant one for your system from Maven Central here (the link is for v1.17.1). If there isn't a prebuilt binary available for your system, you can compile one yourself following the instructions in the GitHub repo here.
Make sure the binary's location is added to your PATH environment variable, and that the binary is renamed to exactly "protoc-gen-grpc-java" (that is the name the protoc CLI expects to find in the path).
Finally, you are ready to run the correct command, protoc --java_out=${OUTPUT_DIR} --grpc-java_out=${OUTPUT_DIR} path/to/proto/file, and the service base classes like RouteGuideGrpc.RouteGuideImplBase should now be generated where they previously were not.
I hope this explanation helps someone else out in the future.
Thank you very much for this investigation. Indeed, the doc is incomplete, and people use Maven to compile everything without understanding how it really works.
