'remote write receiver' HTTP API request in Prometheus - go

I am trying to find a working example of how to use the remote write receiver in Prometheus.
Link : https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver
I am able to send a request to the endpoint (POST /api/v1/write) and can authenticate with the server. However, I have no idea what format the data needs to be sent in.
The official documentation says that the data needs to be in protobuf format and snappy-encoded. I know the libraries for them. I have a few metrics I need to send over to Prometheus at http://localhost:1234/api/v1/write.
The metrics I am trying to export are scraped from a metrics endpoint (http://127.0.0.1:9187/metrics) and look like this:
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.11e-05
go_gc_duration_seconds{quantile="0.25"} 2.4039e-05
go_gc_duration_seconds{quantile="0.5"} 3.4507e-05
go_gc_duration_seconds{quantile="0.75"} 5.7043e-05
go_gc_duration_seconds{quantile="1"} 0.002476999
go_gc_duration_seconds_sum 0.104596342
go_gc_duration_seconds_count 1629
As of now, I can authenticate with my server via a POST request in Go.

Please note that it isn't recommended to send application data to Prometheus via the remote_write protocol, since Prometheus is designed to scrape metrics from the targets specified in its config. This is known as the pull model, while you are trying to push metrics to Prometheus, aka the push model.
If you need to push application metrics to Prometheus, then the following options exist:
Pushing metrics to pushgateway. Please read when to use the pushgateway before using it.
Pushing metrics to statsd_exporter.
Pushing application metrics to VictoriaMetrics (this is an alternative Prometheus-like monitoring system) via any supported text-based data ingestion protocol:
Prometheus text exposition format
Graphite
Influx line protocol
OpenTSDB
DataDog
JSON
CSV
All these protocols are much easier to implement and debug compared to the Prometheus remote_write protocol, since they are text-based, while remote_write is a binary protocol (basically, snappy-compressed protobuf messages sent over HTTP).
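That said, if you do want to speak remote_write directly, the request body is a prompb.WriteRequest message, protobuf-marshalled and then snappy-compressed (block format), POSTed with a couple of spec-mandated headers. Below is a minimal Go sketch; the localhost:1234 endpoint comes from the question, while the job label and sample value are made up for illustration, and authentication is left as a stub:

package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

func main() {
	// One time series with one sample; the metric name goes into the
	// reserved __name__ label. The job label value is made up.
	wr := &prompb.WriteRequest{
		Timeseries: []prompb.TimeSeries{{
			Labels: []prompb.Label{
				{Name: "__name__", Value: "go_gc_duration_seconds_count"},
				{Name: "job", Value: "pushed-metrics"},
			},
			Samples: []prompb.Sample{
				{Value: 1629, Timestamp: time.Now().UnixMilli()},
			},
		}},
	}

	data, err := proto.Marshal(wr)
	if err != nil {
		panic(err)
	}
	body := snappy.Encode(nil, data) // snappy block format, not the streaming format

	req, err := http.NewRequest(http.MethodPost,
		"http://localhost:1234/api/v1/write", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	// Headers required by the remote write spec.
	req.Header.Set("Content-Type", "application/x-protobuf")
	req.Header.Set("Content-Encoding", "snappy")
	req.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")
	// Add your authentication here, e.g. req.SetBasicAuth(user, pass).

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // expect a 2xx status on success
}

Note that the receiving Prometheus must be started with the remote write receiver enabled (the --web.enable-remote-write-receiver flag on current versions, or --enable-feature=remote-write-receiver on older ones), otherwise /api/v1/write rejects the request.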

Related

SpringBoot + Tracing/Traces: Non-intrusive mechanism to collect/scrape/poll application traces (as in traceId, spanId, parentId)

Small question regarding SpringBoot web applications and tracing please.
By tracing, I mean traceId, spanId, parentId. Most of all, how to collect/scrape/poll those traces.
For example, logging:
SpringBoot can send logs over the wire to external systems such as Loki or Logstash (just to name a few). But this construct requires the web application to know the destinations and to send the logs there.
An alternative construct, which is non-invasive, is to just write the logs to a log file and let some log forwarder, for instance FileBeat or the Splunk forwarder, collect/scrape/poll the logs and send them to the destinations, without the web application knowing anything about them.
Another example, metrics.
SpringBoot can send metrics to different metrics backends, such as Elastic or DataDog (just to name a few), using the corresponding micrometer-registry-abc. But again, the web application here needs to know about the destination.
An alternative to this construct is, for instance, to expose the /metrics endpoint or the /prometheus endpoint, and have something like MetricBeat or a Prometheus agent collect/scrape/poll those metrics. Here again, the web application does not need to know anything about the destinations; it is not intrusive at all.
My question is about traces, as in traceId, spanId, parentId.
SpringBoot can send traces to a Zipkin server, which is very popular.
However, it seems there is no construct to collect/scrape/poll traces:
        | send/push                    | collect/scrape/poll
logging | yes (TCP Logstash/Loki)      | yes (FileBeat/Splunk forwarder)
metrics | yes (micrometer-registry-X)  | yes (Prometheus agent/MetricBeat)
traces  | yes (Zipkin)                 | ?
Question: what is the best way to have SpringBoot-generated traces collected/scraped/polled in a non-invasive way, please?
Thank you

How does the serverless datadog forwarder encrypt/encode their logs?

I am having trouble figuring out how the Datadog forwarder encodes/encrypts its messages. We are utilizing the forwarder on Datadog using the following documentation: https://docs.datadoghq.com/serverless/forwarder/ . On that page, Datadog has an option to send the same event to another lambda that it invokes via the AdditionalTargetLambdaARNs flag. We are doing this, and the other lambda is invoked, but the event input we get is a long string that looks base64 encoded; when I put it into a base64 decoder, I get gibberish back. I was wondering if anyone knew how Datadog compresses/encodes/encrypts the data/logs it sends, so that I can read the logs in my lambda and perform actions off of the forwarded data. I have been searching Google and the Datadog site for documentation on this, but I can't find any.
It looks like Datadog uses zstd compression in order to compress its data before sending it: https://github.com/DataDog/datadog-agent/blob/972c4caf3e6bc7fa877c4a761122aef88e748b48/pkg/util/compression/zlib.go
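For what it's worth, here is a hedged Go sketch of decoding such a payload, under the assumption (mine, not something the Datadog docs confirm) that the event is base64-encoded compressed bytes; it tries zstd and falls back to zlib, since the linked agent file is actually the zlib implementation:

package main

import (
	"bytes"
	"compress/zlib"
	"encoding/base64"
	"fmt"
	"io"

	"github.com/klauspost/compress/zstd"
)

// decodePayload base64-decodes the event string and then attempts to
// decompress it. Assumption: payload = base64(compressed-bytes).
func decodePayload(payload string) ([]byte, error) {
	raw, err := base64.StdEncoding.DecodeString(payload)
	if err != nil {
		return nil, fmt.Errorf("base64 decode: %w", err)
	}
	// Try zstd first, as suggested above...
	if dec, err := zstd.NewReader(nil); err == nil {
		defer dec.Close()
		if out, err := dec.DecodeAll(raw, nil); err == nil {
			return out, nil
		}
	}
	// ...then fall back to zlib, which the linked agent file implements.
	zr, err := zlib.NewReader(bytes.NewReader(raw))
	if err != nil {
		return nil, fmt.Errorf("zlib decode: %w", err)
	}
	defer zr.Close()
	return io.ReadAll(zr)
}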

Prometheus Exporter - Filtering targets

I'm in the process of writing a Prometheus exporter in Go to expose metrics pushed from AIX servers. The AIX servers push their metrics (in JSON) to a central listener (the exporter program) that converts them to standard Prometheus metrics and exposes them for scraping.
The issue I have is that the hostname for the metrics is extracted from the pushed JSON. I store this as a label in each metric, e.g. njmon_memory_free{lpar="myhostname"}. While this works, it's less than ideal, as there doesn't seem to be a way to relabel this to the usual instance label (njmon_memory_free{instance="myhostname"}). The Prometheus relabelling happens before the scrape, so the lpar label isn't there to be relabelled.
One option seems to be to rewrite the exporter so that the Prometheus server probes defined targets, each target being the lpar. In order for that to work, I'd need a means to filter the stored metrics by lpar so only metrics relating to the target/lpar are returned. Is this a practical solution, or am I forced to create a dedicated listener or URL for every lpar?
I'm writing up the answer I gave in the comments, since it was helpful to the author:
Use the "instance" label in the exporter, not "lpar" (change the exporter code)
Use honor_labels: true in the Prometheus scrape_config
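A rough sketch of the first point using client_golang; the metric name comes from the question, while the port, sample value, and help text are placeholders:

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// njmon_memory_free keyed by "instance" instead of "lpar", so Prometheus
// can keep it as the usual instance label.
var njmonMemoryFree = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "njmon_memory_free",
		Help: "Free memory reported by njmon.", // placeholder help text
	},
	[]string{"instance"},
)

func main() {
	prometheus.MustRegister(njmonMemoryFree)

	// In the real exporter this value comes from the JSON pushed by each LPAR.
	njmonMemoryFree.WithLabelValues("myhostname").Set(123456) // placeholder value

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9100", nil)) // placeholder port
}

With honor_labels: true on the scrape job (the second point), Prometheus keeps this exporter-supplied instance label instead of overwriting it with the scrape target's host:port.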

Consume protobuf messages from Graphite

I'd like to know if I can send data to Graphite using protobuf.
I have an application that sends statistics in protobuf format and I want to start sending those statistics to Graphite.
I searched on Google and all I found was this: https://graphite.readthedocs.io/en/latest/search.html?q=protobuf&check_keywords=yes&area=default# but it's not clear whether it's only for Graphite's internal core usage.
Thanks!
Yes you can; I think it has been available since version 1.x and up.
See for an example in Python:
https://github.com/graphite-project/carbon/blob/master/examples/example-protobuf-client.py
You will have to enable the listener in the Carbon configuration:
https://github.com/graphite-project/carbon/blob/master/conf/carbon.conf.example#L113-L117

How to send custom metrics from JMeter to InfluxDB

We are creating custom metrics in JMeter using Beanshell scripting and saving them to a file. Our requirement is to send these metrics to InfluxDB. We tried using the Backend Listener with the Graphite and InfluxDB implementation clients, but couldn't send the custom values; only the default JMeter metrics are being passed.
Has anyone done this before? Can you guide us to resolve this issue?
We are using JMeter 3.3 and influxdb-1.4.2-1
Thanks,
BB
Two words: line protocol.
Another two words: custom listener (Beanshell/JSR223 with Groovy).
Marry them, and you'll have what you want.
I did that work once, and it didn't take long.
There may be other options (like taking the result file and feeding it to a script that shapes it into the same line protocol, post-execution rather than live), but the one I suggest is the simplest.
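For reference, the line protocol is just measurement,tags fields timestamp, one point per line, so the custom listener only needs to build strings like the following (the measurement, tag, and field names here are made up):

custom_jmeter,test=login,host=jmeter01 response_ms=231,errors=0 1514764800000000000
custom_jmeter,test=search,host=jmeter01 response_ms=187,errors=1 1514764801000000000

The trailing timestamp is in nanoseconds by default and can be omitted to let the server assign the current time.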
To do it you can use the /write endpoint, as described in the InfluxDB documentation.
The screenshots below showed how it can be done in JMeter using an "HTTP Request" sampler, and how the custom data then looks in the DB:
[screenshot: "HTTP Request" sampler configured to POST custom data to the InfluxDB /write endpoint]
[screenshot: the resulting points as they appear in the database]
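In sampler terms, that request boils down to a plain HTTP POST like this (the database name and host are placeholders; precision=ms tells InfluxDB the trailing timestamp is in milliseconds):

POST /write?db=jmeter&precision=ms HTTP/1.1
Host: influxdb-host:8086
Content-Type: text/plain

custom_jmeter,test=login response_ms=231 1514764800000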
