I'm trying to send logs to the APM server using the Uber zap logging library.
I've tried the apmzap instrumentation module (https://www.elastic.co/guide/en/apm/agent/go/1.x/builtin-modules.html#builtin-modules-apmzap), but it isn't working for me: the transactions are not being sent to the APM server.
envs:
ELASTIC_APM_LOG_FILE=stderr
ELASTIC_APM_LOG_LEVEL=debug
ELASTIC_APM_SERVICE_NAME=service-name
ELASTIC_APM_SERVER_URL=http://localhost:8200
import (
    "net/http"

    "go.elastic.co/apm/module/apmzap"
    "go.uber.org/zap"
)
// apmzap.Core.WrapCore will wrap the core created by zap.NewExample
// such that logs are also sent to the apmzap.Core.
//
// apmzap.Core will send "error", "panic", and "fatal" level log
// messages to Elastic APM.
var logger = zap.NewExample(zap.WrapCore((&apmzap.Core{}).WrapCore))

func handleRequest(w http.ResponseWriter, req *http.Request) {
    // apmzap.TraceContext extracts the transaction and span (if any)
    // from the given context, and returns zap.Fields containing the
    // trace, transaction, and span IDs.
    traceContextFields := apmzap.TraceContext(req.Context())
    logger.With(traceContextFields...).Debug("handling request")
    logger.With(traceContextFields...).Error("handling error")
}
Updating the APM stack to 7.15 fixed it; everything is working perfectly now.
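Unrelated to the version fix, note that apmzap only attaches trace IDs to log entries; the transactions themselves have to be started by the tracer. A minimal sketch (the apmhttp middleware and the mux wiring are assumptions, not from the question) that records a transaction per request, so the log correlation has something to attach to:
package main

import (
    "net/http"

    "go.elastic.co/apm/module/apmhttp"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", handleRequest)
    // apmhttp.Wrap starts a transaction for each request and stores it in
    // the request context, where apmzap.TraceContext can pick it up.
    http.ListenAndServe(":8080", apmhttp.Wrap(mux))
}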
I'm trying to implement a Google Cloud Function to test the Google Cloud Logging client library. Below is my code:
// Package loggingclient contains an HTTP Cloud Function.
package loggingclient

import (
    "context"
    "fmt"
    "net/http"

    "cloud.google.com/go/logging"
)
// HelloWorld writes a log entry with a label and ERROR severity to
// Cloud Logging whenever the function is triggered.
func HelloWorld(w http.ResponseWriter, r *http.Request) {
    label := map[string]string{"priority": "High"}
    projectName := "my-project-id"
    ctx := context.Background()
    client, err := logging.NewClient(ctx, projectName)
    if err != nil {
        fmt.Printf("client not created: %v\n", err)
        return
    }
    // Close flushes any buffered entries before the function returns.
    defer client.Close()
    lg := client.Logger("MY-LOGGER")
    lg.Log(logging.Entry{
        Payload:  "Hello, This is error!!",
        Severity: logging.Error,
        Labels:   label,
    })
}
Here I'm expecting a log entry with the message "Hello, This is error!!", the label "priority": "High", and severity "ERROR".
But when I actually trigger this Cloud Function, I don't get any new log entries. Do the client logging libraries not work inside Cloud Functions? How can I resolve this?
Thanks
It works on Cloud Functions; I have done the exact same thing in a Cloud Function before. You can follow Google's official documentation on Cloud Function logging here.
Also ensure that the service account has the necessary permissions for logging:
https://cloud.google.com/logging/docs/access-control
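Beyond permissions, it helps to make write failures visible in the function itself. A minimal sketch (project ID and logger name taken from the question; LogSync is used instead of Log so that any error, including a permission error, is returned synchronously):
client, err := logging.NewClient(ctx, "my-project-id")
if err != nil {
    fmt.Printf("client not created: %v\n", err)
    return
}
defer client.Close()
lg := client.Logger("MY-LOGGER")
if err := lg.LogSync(ctx, logging.Entry{
    Payload:  "Hello, This is error!!",
    Severity: logging.Error,
    Labels:   map[string]string{"priority": "High"},
}); err != nil {
    fmt.Printf("write failed, check IAM permissions: %v\n", err)
}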
I have a server-side streaming RPC hosted on Google Cloud Run.
With the following proto definition:
syntax = "proto3";
package test.v1;
service MyService {
// Subscribe to a stream of events.
rpc Subscribe (SubscribeRequest) returns (stream SubscribeResponse) {}
}
message SubscribeRequest {
}
message SubscribeResponse {
}
Using BloomRPC/grpcurl, when I stop the method I get a stream.Context().Done() event, which I can use to gracefully stop certain tasks. Here is an example of the Subscribe method:
func (s *myService) Subscribe(req *pb.SubscribeRequest, stream pb.MyService_SubscribeServer) error {
    // Create a channel for this client.
    ch := make(chan *pb.SubscribeResponse)
    // Add the channel object 'ch' to a global list of channels where a
    // 'broadcaster' sends messages to all connected clients.
    // TODO: pass to broadcaster client list.
    for {
        select {
        case <-stream.Context().Done():
            close(ch)
            fmt.Println("Removed client from global list of channels")
            return nil
        case res := <-ch:
            _ = stream.Send(res)
        }
    }
}
On the client side, when I test the service locally (i.e. against a local gRPC server in Golang) using BloomRPC/grpcurl, I get a message on the stream.Context().Done() channel whenever I stop the BloomRPC/grpcurl connection. This is the expected behaviour.
However, running the exact same code on Cloud Run in the same way (via BloomRPC/grpcurl), I don't get a stream.Context().Done() message - any reason why this would be different on Google Cloud Run? Looking at the Cloud Run logs, a call to the Subscribe method essentially 'hangs' until the request reaches its timeout.
I needed to enable HTTP/2 connections for the Cloud Run service for this to work (end-to-end HTTP/2; with the gcloud CLI this is the --use-http2 flag).
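For reference, a minimal sketch of the hosting side this assumes (Cloud Run injects the listening port through the PORT environment variable; the registration function name follows from the question's proto):
lis, err := net.Listen("tcp", ":"+os.Getenv("PORT"))
if err != nil {
    log.Fatalf("failed to listen: %v", err)
}
s := grpc.NewServer()
pb.RegisterMyServiceServer(s, &myService{})
// gRPC needs the end-to-end HTTP/2 connection that the setting above enables.
log.Fatal(s.Serve(lis))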
I am using logrus to do all the logging in my Golang application. However, I also want to integrate this with Elasticsearch, such that all logs are also flushed to Elasticsearch when I create a logrus log entry. Currently all logs are written to a file, as shown in the snippet below. How could I integrate this with Elasticsearch?
type LoggerConfig struct {
    Filename string `validate:"regexp=.log$"`
    AppName  string `validate:"regexp=^[a-zA-Z]+$"`
}

type AppLogger struct {
    Err    error
    Logger logrus.Entry
}
func Logger(loggerConfig LoggerConfig) AppLogger {
    response := new(AppLogger)
    // Validate the schema of loggerConfig; a validation error is
    // recorded on the response struct.
    if errs := validator.Validate(loggerConfig); errs != nil {
        response.Err = errs
    }
    logrus.SetFormatter(&logrus.JSONFormatter{})
    f, err := os.OpenFile(loggerConfig.Filename, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0755)
    if err != nil {
        response.Err = err
    }
    multipleWriter := io.MultiWriter(os.Stdout, f)
    logrus.SetOutput(multipleWriter)
    contextLogger := logrus.WithFields(logrus.Fields{
        "app": loggerConfig.AppName,
    })
    //logrus.AddHook(hook)
    response.Logger = *contextLogger
    return *response
}
I have tried elogrus, which adds a hook, but I am not sure how to use it. Here is the method that attempts to create the Elasticsearch client. How could I integrate this with the logrus instance?
func prepareElasticSearchClient() *elastic.Client {
    indexName := "my-server"
    client, err := elastic.NewClientFromConfig(&config.Config{
        URL:      os.Getenv("ELASTIC_SEARCH_URL_LOGS") + ":" + os.Getenv("ELASTIC_SEARCH_PORT_LOGS"),
        Index:    indexName,
        Username: os.Getenv("ELASTIC_SEARCH_USERNAME_LOGS"),
        Password: os.Getenv("ELASTIC_SEARCH_PASSWORD_LOGS"),
    })
    if err != nil {
        // Surface the error instead of silently discarding it.
        panic(err)
    }
    return client
}
Earlier I used modules like Winston, where it was super easy to set up Elasticsearch logging, but somehow I find very little documentation on how to integrate Golang logging with Elasticsearch.
With elogrus you first create the Elastic client and pass it to the elogrus hook when creating the hook with elogrus.NewAsyncElasticHook(). The hook just wraps sending messages to Elastic. Then you add this hook to the logrus logger. Every time you log a message, it fires the hook and sends the message (if it passes the log-level filter) to Elastic.
log := logrus.New()
client, err := elastic.NewClient(elastic.SetURL("http://localhost:9200"))
// ... handle err
hook, err := elogrus.NewAsyncElasticHook(client, "localhost", logrus.DebugLevel, "testlog")
// ... handle err
log.Hooks.Add(hook)
The signature of NewAsyncElasticHook is (client *elastic.Client, host string, level logrus.Level, index string), where:
client is the pointer to the elastic.Client you obtained before
host is a string denoting the host the log traces are sent from (i.e. the hostname of the machine the logging program runs on)
level is the least severe logrus level you still want sent (e.g. you might show DEBUG messages locally but send only ERROR and more severe messages to Elastic)
index is the name of the Elasticsearch index the log messages should be added to
From here you can use log as you normally would with logrus, and all messages will also get passed to Elastic.
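For example (the field and message are illustrative):
// This fires the hook (the example above passes logrus.DebugLevel, so every
// level is forwarded) and the entry is also indexed in Elasticsearch under "testlog".
log.WithFields(logrus.Fields{"app": "my-server"}).Error("something failed")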
The other part of the issue was a bit trickier and rooted in (not only) the Golang elastic client's node-sniffing behavior. We debugged it in chat, and the summary was posted as my answer to the OP's other question about it: Cannot connect to elastic search : no active connection found: no Elasticsearch node available
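The short version of that fix, as a hedged sketch (the URL is a placeholder): disable sniffing so the client keeps using the configured URL rather than the node addresses it discovers:
client, err := elastic.NewClient(
    elastic.SetURL("http://localhost:9200"),
    // Without sniffing the client won't replace the configured URL with
    // advertised node addresses that may be unreachable from outside.
    elastic.SetSniff(false),
)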
I have a list of metrics in JSON format to be sent to Prometheus. How would I use the Gauge metric type in client_golang to send these metrics to Prometheus all at once?
Right now I have the code below:
var (
    dockerVer = prometheus.NewGauge(prometheus.GaugeOpts{
        Name: "docker_version_latency",
        Help: "Latency of docker version command.",
    })
)
func init() {
    // Metrics have to be registered to be exposed:
    prometheus.MustRegister(dockerVer)
}
func main() {
    // The Handler function provides a default handler to expose metrics
    // via an HTTP server. "/metrics" is the usual endpoint for that.
    // ListenAndServe blocks, so neither call may sit inside the polling loop.
    http.Handle("/metrics", promhttp.Handler())
    go func() {
        for {
            // get_json_response is assumed to refresh the jsonData slice;
            // a time.Sleep between polls would be sensible here.
            get_json_response(1234, "version")
            dockerVer.Set(jsonData[0].Latency)
        }
    }()
    log.Fatal(http.ListenAndServe(":8081", nil))
}
I have many more metrics, and I have to read them from the JSON and set the gauges dynamically.
You are looking to write a custom collector as part of an exporter; see https://github.com/prometheus/consul_exporter/blob/master/consul_exporter.go#L156 for one example.
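A minimal sketch of such a collector (the fetchJSONMetrics helper and the metric/label names are assumptions, not from the question):
type jsonCollector struct {
    latency *prometheus.Desc
}

func newJSONCollector() *jsonCollector {
    return &jsonCollector{
        latency: prometheus.NewDesc(
            "docker_command_latency_seconds",
            "Latency of docker commands.",
            []string{"command"}, nil,
        ),
    }
}

func (c *jsonCollector) Describe(ch chan<- *prometheus.Desc) {
    ch <- c.latency
}

// Collect runs on every scrape: it reads the current JSON payload and
// emits one gauge sample per metric, so nothing has to be set up front.
func (c *jsonCollector) Collect(ch chan<- prometheus.Metric) {
    for _, m := range fetchJSONMetrics() { // hypothetical JSON reader
        ch <- prometheus.MustNewConstMetric(
            c.latency, prometheus.GaugeValue, m.Latency, m.Command,
        )
    }
}
Register it once with prometheus.MustRegister(newJSONCollector()); every scrape of /metrics then reports the latest values with no per-gauge bookkeeping.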
Docker also has Prometheus metrics built in that can be enabled, so you may not need to write this.
My web server is coded in Golang and supports HTTPS. I wish to leverage the HTTP/2 server push features in it. The following link explains how to convert an HTTP server to support HTTP/2:
https://www.ianlewis.org/en/http2-and-go
However, it is not clear how to implement server push in Golang.
- How should I add the server push functionality?
- How do I control, or manage, the documents and files to be pushed?
Go 1.7 and older do not support HTTP/2 server push in the standard library. Support for server push will be added in the upcoming 1.8 release (see the release notes, expected release is February).
With Go 1.8 you can use the new http.Pusher interface, which is implemented by net/http's default ResponseWriter. The Pusher's Push method returns http.ErrNotSupported if server push is not supported (HTTP/1) or not allowed (the client has disabled server push).
Example:
package main

import (
    "io"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/pushed", func(w http.ResponseWriter, r *http.Request) {
        io.WriteString(w, "hello server push")
    })
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        if pusher, ok := w.(http.Pusher); ok {
            if err := pusher.Push("/pushed", nil); err != nil {
                log.Println("push failed:", err)
            }
        }
        io.WriteString(w, "hello world")
    })
    log.Fatal(http.ListenAndServeTLS(":443", "server.crt", "server.key", nil))
}
If you want to use server push with Go 1.7 or older, you can use the golang.org/x/net/http2 package and write the frames directly.
As mentioned in other answers, you can make use of the Go 1.8 feature (cast the writer to http.Pusher and then use the Push method).
That comes with a caveat: you must be serving the HTTP/2 traffic directly from your server.
If you're behind a proxy like NGINX, this might not work. If you want to cover that scenario, you can make use of the Link header to advertise the URLs to be pushed.
// In the case of HTTP/1.1 we make use of the `Link` header
// to indicate that the client (in our case, NGINX) should
// retrieve a certain URL.
//
// See more at https://www.w3.org/TR/preload/#server-push-http-2.
func handleIndex(w http.ResponseWriter, r *http.Request) {
    var err error
    if *http2 {
        pusher, ok := w.(http.Pusher)
        if ok {
            must(pusher.Push("/image.svg", nil))
        }
    } else {
        // This ends up taking the effect of a server push
        // when interacting directly with NGINX.
        w.Header().Add("Link",
            "</image.svg>; rel=preload; as=image")
    }
    w.Header().Add("Content-Type", "text/html")
    _, err = w.Write(assets.Index)
    must(err)
}
P.S.: I wrote more about this at https://ops.tips/blog/nginx-http2-server-push/ if you're interested.