I have a list of metrics in JSON format to be sent to Prometheus. How would I use the Gauge metric type in client_golang to send these metrics to Prometheus all at once?
Right now I have the code below:
var (
    dockerVer = prometheus.NewGauge(prometheus.GaugeOpts{
        Name: "docker_version_latency",
        Help: "Latency of docker version command.",
    })
)

func init() {
    // Metrics have to be registered to be exposed:
    prometheus.MustRegister(dockerVer)
}

func main() {
    // Update the gauge in the background so the HTTP server below
    // is not blocked by the polling loop.
    go func() {
        for {
            get_json_response(1234, "version")
            dockerVer.Set(jsonData[0].Latency)
        }
    }()

    // The Handler function provides a default handler to expose metrics
    // via an HTTP server. "/metrics" is the usual endpoint for that.
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":8081", nil))
}
I have many more metrics, and I have to read them from the JSON and feed them to gauges dynamically.
You are looking to write a custom collector as part of an exporter; see https://github.com/prometheus/consul_exporter/blob/master/consul_exporter.go#L156 for one example.
Docker also has Prometheus metrics built in that can be enabled, so you may not need to write this.
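As a rough sketch of that approach (not the consul_exporter code itself), a custom collector can read your JSON on every scrape and emit one gauge per entry with prometheus.MustNewConstMetric. The MetricEntry type and getJSONResponse helper below are hypothetical stand-ins for your own JSON handling:
package main

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// MetricEntry is a hypothetical representation of one entry in your JSON.
type MetricEntry struct {
    Name    string
    Help    string
    Latency float64
}

// jsonCollector turns the JSON entries into gauge metrics on every scrape.
type jsonCollector struct{}

// Describe intentionally sends nothing, which makes this an "unchecked"
// collector - convenient when the set of metrics is dynamic.
func (c jsonCollector) Describe(ch chan<- *prometheus.Desc) {}

// Collect is called on every scrape of /metrics.
func (c jsonCollector) Collect(ch chan<- prometheus.Metric) {
    for _, e := range getJSONResponse(1234, "version") {
        ch <- prometheus.MustNewConstMetric(
            prometheus.NewDesc(e.Name, e.Help, nil, nil),
            prometheus.GaugeValue,
            e.Latency,
        )
    }
}

// getJSONResponse is a placeholder for your existing JSON retrieval logic.
func getJSONResponse(port int, cmd string) []MetricEntry {
    return []MetricEntry{{Name: "docker_version_latency", Help: "Latency of docker version command.", Latency: 0.42}}
}

func main() {
    prometheus.MustRegister(jsonCollector{})
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":8081", nil))
}
This way nothing has to be "pushed" all at once; Prometheus pulls whatever the JSON currently contains each time it scrapes.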
Related
I'm trying to send logs to the APM server using the uber zap logging library.
I've tried their instrumentation module (https://www.elastic.co/guide/en/apm/agent/go/1.x/builtin-modules.html#builtin-modules-apmzap) to do so but it's not working for me:
The transactions are not being sent to the APM server.
envs:
ELASTIC_APM_LOG_FILE=stderr
ELASTIC_APM_LOG_LEVEL=debug
ELASTIC_APM_SERVICE_NAME=service-name
ELASTIC_APM_SERVER_URL=http://localhost:8200
import (
    "net/http"

    "go.uber.org/zap"
    "go.elastic.co/apm/module/apmzap"
)

// apmzap.Core.WrapCore will wrap the core created by zap.NewExample
// such that logs are also sent to the apmzap.Core.
//
// apmzap.Core will send "error", "panic", and "fatal" level log
// messages to Elastic APM.
var logger = zap.NewExample(zap.WrapCore((&apmzap.Core{}).WrapCore))

func handleRequest(w http.ResponseWriter, req *http.Request) {
    // apmzap.TraceContext extracts the transaction and span (if any)
    // from the given context, and returns zap.Fields containing the
    // trace, transaction, and span IDs.
    traceContextFields := apmzap.TraceContext(req.Context())
    logger.With(traceContextFields...).Debug("handling request")
    logger.With(traceContextFields...).Error("handling error")
}
I updated the APM stack to 7.15 and it's working perfectly now.
I have a server-side streaming RPC hosted on Google Cloud Run.
With the following proto definition:
syntax = "proto3";

package test.v1;

service MyService {
  // Subscribe to a stream of events.
  rpc Subscribe (SubscribeRequest) returns (stream SubscribeResponse) {}
}

message SubscribeRequest {
}

message SubscribeResponse {
}
Using BloomRPC/grpcurl, when I stop the method I get a stream.Context().Done() event which I can use to gracefully stop certain tasks. Here is an example of the Subscribe method:
func (s *myService) Subscribe(req *pb.SubscribeRequest, stream pb.MyService_SubscribeServer) error {
    // Create a channel for this client.
    ch := make(chan *pb.SubscribeResponse)
    // Add the channel 'ch' to a global list of channels where a 'broadcaster'
    // sends messages to all connected clients.
    // TODO: pass to broadcaster client list.
    for {
        select {
        case <-stream.Context().Done():
            close(ch)
            fmt.Println("Removed client from global list of channels")
            return nil
        case res := <-ch:
            _ = stream.Send(res)
        }
    }
}
On the client side, when I test the service locally (i.e. running a local gRPC server in Golang), using BloomRPC/grpcurl I get a message on the stream.Context().Done() channel whenever I stop the BloomRPC/grpcurl connection. This is the expected behaviour.
However, running the exact same code on Cloud Run in the same way (via BloomRPC/grpcurl), I don't get a stream.Context().Done() message - any reason why this would be different on Google Cloud Run? Looking at the Cloud Run logs, a call to the Subscribe method essentially 'hangs' until the request reaches its timeout.
I needed to enable HTTP/2 Connections in Cloud Run for this to work.
I have a gin service where one of the endpoints looks like this:
const myPath = "/upload-some-file/:uuid"
In my middleware that sends data to Prometheus, I have something like this:
requestCounter = promauto.NewCounterVec(prometheus.CounterOpts{
    // Note: Prometheus metric names may not contain hyphens.
    Name: "all_http_requests",
    Help: "Total number of http requests",
}, []string{"Method", "Endpoint"})

func Telemetry() gin.HandlerFunc {
    return func(c *gin.Context) {
        // Metrics for requests volume
        requestCounter.With(prometheus.Labels{"Method": c.Request.Method, "Endpoint": c.Request.URL.Path}).Inc()
        c.Next()
    }
}
But I notice that Prometheus is unable to figure out that a parameter is actually embedded in the path, so it treats every unique uuid as a new path.
Is there some way to let prometheus realize that it is actually using a URL with embedded parameters?
I found this: https://github.com/gin-gonic/gin/issues/748#issuecomment-543683781
So I can simply call c.FullPath() to get the matched route.
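For illustration, the Telemetry middleware above could then label by the matched route instead of the raw path, something like this (falling back to the raw path for unmatched routes is my own assumption):
func Telemetry() gin.HandlerFunc {
    return func(c *gin.Context) {
        // c.FullPath() returns the matched route pattern, e.g.
        // "/upload-some-file/:uuid", so all uuids collapse into one label value.
        endpoint := c.FullPath()
        if endpoint == "" {
            // No route matched (404); fall back to the raw path.
            endpoint = c.Request.URL.Path
        }
        requestCounter.With(prometheus.Labels{"Method": c.Request.Method, "Endpoint": endpoint}).Inc()
        c.Next()
    }
}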
I would like to try a simple example of using Prometheus.
I have downloaded the server binaries.
I have started with the simple example code, but with a few modifications:
var addr = flag.String("listen-address", ":8080", "The address to listen on for HTTP requests.")

func main() {
    flag.Parse()
    http.Handle("/metrics", promhttp.Handler())
    http.Handle("/test/{id}", myHandler(promhttp.Handler()))
    log.Fatal(http.ListenAndServe(*addr, nil))
}

func myHandler(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "hello, you've hit %s\n", r.URL.Path)
        next.ServeHTTP(w, r)
    })
}
Questions:
I assume Prometheus is a monitoring tool, and I would like to monitor metrics for my endpoints, e.g. /test/{id}, separately. Did I get the idea right by creating several handlers and using promhttp.Handler() as middleware?
What else, apart from the count and latency of requests, can be monitored in e.g. a simple web app with a database?
To follow up on David Maze's answer: the default handler promhttp.Handler is for reporting metrics (it gathers from all registered collectors and reports them on request).
Unfortunately, it is not a generic http middleware that gives you any metrics out of the box.
Many of Go's web frameworks have some sort of community Prometheus middleware (e.g. gin's) that gives metrics out of the box (latency, response codes, request counts, etc.).
The go prometheus client library has examples of how to add metrics to your application.
var (
    httpRequests = prometheus.NewCounter(
        prometheus.CounterOpts{
            Name: "http_requests_total",
            Help: "Number of http requests.",
        },
    )
)

func init() {
    // Metrics have to be registered to be exposed:
    prometheus.MustRegister(httpRequests)
}

func myHandler(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        httpRequests.Inc()
        fmt.Fprintf(w, "hello, you've hit %s\n", r.URL.Path)
        next.ServeHTTP(w, r)
    })
}
As for your second question: lots :) Google advocates monitoring the 4 golden signals:
https://landing.google.com/sre/book/chapters/monitoring-distributed-systems.html#xref_monitoring_golden-signals
These are:
Traffic - throughput, i.e. counts/time
Latency - distribution / histogram (a sketch follows this list)
Errors - HTTP response codes / explicit error counts
Saturation - resource queues, i.e. if there is a goroutine pool, how many goroutines are active at a given time
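As a minimal sketch of the latency signal with client_golang (the metric name and default buckets are just illustrative):
var requestDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
    Name:    "http_request_duration_seconds",
    Help:    "Distribution of request latencies.",
    Buckets: prometheus.DefBuckets,
})

func init() {
    prometheus.MustRegister(requestDuration)
}

func instrumented(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Observe how long the wrapped handler took.
        timer := prometheus.NewTimer(requestDuration)
        defer timer.ObserveDuration()
        next.ServeHTTP(w, r)
    })
}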
In my experience it has also been helpful to have visibility of all the interactions between your application and your database (i.e. the 4 golden signals applied to your database):
number of calls made to the db from the app
latencies of the calls made
results (err/success) of the calls made, to determine availability (success / total)
saturation available from your db driver (https://golang.org/pkg/database/sql/#DBStats), as sketched below
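As a rough sketch of that last point, one way to expose a field of sql.DBStats is a GaugeFunc that reads the driver stats on every scrape (the metric name and the db handle are assumptions on my part):
import (
    "database/sql"

    "github.com/prometheus/client_golang/prometheus"
)

// registerDBStats exposes one field of sql.DBStats as a gauge that is
// read fresh from the driver each time Prometheus scrapes.
func registerDBStats(db *sql.DB) {
    prometheus.MustRegister(prometheus.NewGaugeFunc(prometheus.GaugeOpts{
        Name: "db_open_connections",
        Help: "Number of open connections to the database.",
    }, func() float64 {
        return float64(db.Stats().OpenConnections)
    }))
}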
I'm currently working on a program written in Go (golang) that is monitored by Prometheus.
Now the program should serve two endpoints, /metrics and /service.
When scraped by Prometheus on /metrics, it should expose its own metrics (e.g. requests made, request latency, ...), and when scraped on /service, it should query an API, get metrics from there and expose them to Prometheus.
For the first part I create e.g. a Counter via
requestCount := kitprometheus.NewCounterFrom(stdprometheus.CounterOpts{
    Namespace: "SERVICE",
    Subsystem: "service_metrics",
    Name:      "request_count",
    Help:      "Number of requests received.",
}, fieldKeys)
and serve the stuff via:
http.Handle("/metrics", promhttp.Handler())
http.ListenAndServe(":8090", nil)
For the /service part, I query the API, extract a value and update a different Gauge via Gauge.Set(value).
How do I expose this last Gauge on the different endpoint without firing up another server (different port)?
Do I have to create my own Collector (I have no custom metrics, so no, right?)?
You can use prometheus.NewRegistry to create a custom registry, and expose it on whatever endpoint you want by using promhttp.HandlerFor.
var (
    // custom registry
    reg = prometheus.NewRegistry()
    // some metrics
    myGauge = prometheus.NewGaugeVec(
        prometheus.GaugeOpts{
            Name: "gauge_name",
            Help: "gauge_help",
        },
        []string{"l"},
    )
)

func init() {
    // register metrics with the custom registry
    reg.MustRegister(myGauge)
}

func main() {
    // instrument
    myGauge.WithLabelValues("l").Set(123)
    // expose the custom registry on its own endpoint
    http.Handle("/service", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
    log.Fatal(http.ListenAndServe(":8090", nil))
}
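Since the question asks for both endpoints without firing up a second server, a sketch of main that keeps /metrics on the default registry and /service on the custom registry, all on one port, could look like this:
func main() {
    myGauge.WithLabelValues("l").Set(123)

    // Default registry on /metrics.
    http.Handle("/metrics", promhttp.Handler())
    // Custom registry on /service.
    http.Handle("/service", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))

    log.Fatal(http.ListenAndServe(":8090", nil))
}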