I need to push CPU info to an OpenTSDB server using Go.
What is the procedure for sending data in Go?
Which package should I use to send the data (websocket or http)?
In which format should I send the data?
Which method should I use for pushing the data (POST or GET)?
You can use the https://github.com/shirou/gopsutil package to gather metrics, then use the net/http package to push the data to your backend via a POST request with a JSON body. Have a look at this thread for posting data with Go: How do I send a JSON string in a POST request in Go.
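For illustration, a minimal sketch of that approach might look like the following, assuming OpenTSDB's standard /api/put HTTP endpoint on the default port 4242 and a hypothetical host tag; adjust the metric name and tags to your setup:

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"time"

	"github.com/shirou/gopsutil/v3/cpu" // v3 module path of the linked package
)

// DataPoint matches the JSON body OpenTSDB's /api/put endpoint expects.
type DataPoint struct {
	Metric    string            `json:"metric"`
	Timestamp int64             `json:"timestamp"`
	Value     float64           `json:"value"`
	Tags      map[string]string `json:"tags"`
}

func main() {
	// Gather overall CPU usage over a one-second window.
	percents, err := cpu.Percent(time.Second, false)
	if err != nil {
		log.Fatal(err)
	}

	body, err := json.Marshal(DataPoint{
		Metric:    "sys.cpu.percent",
		Timestamp: time.Now().Unix(),
		Value:     percents[0],
		Tags:      map[string]string{"host": "myhost"}, // hypothetical tag
	})
	if err != nil {
		log.Fatal(err)
	}

	// OpenTSDB accepts data points as JSON via HTTP POST.
	resp, err := http.Post("http://localhost:4242/api/put", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```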
I am trying to find a working example of how to use the remote write receiver in Prometheus.
Link: https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver
I am able to send a request to the endpoint (POST /api/v1/write) and can authenticate with the server. However, I have no idea in what format I need to send the data.
The official documentation says that the data needs to be in Protobuf format and snappy-encoded. I know the libraries for them. I have a few metrics I need to send over to Prometheus at http://localhost:1234/api/v1/write.
The metrics I am trying to export are scraped from a metrics endpoint (http://127.0.0.1:9187/metrics) and look like this:
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.11e-05
go_gc_duration_seconds{quantile="0.25"} 2.4039e-05
go_gc_duration_seconds{quantile="0.5"} 3.4507e-05
go_gc_duration_seconds{quantile="0.75"} 5.7043e-05
go_gc_duration_seconds{quantile="1"} 0.002476999
go_gc_duration_seconds_sum 0.104596342
go_gc_duration_seconds_count 1629
As of now, I can authenticate with my server via a POST request in Go.
Please note that it isn't recommended to send application data to Prometheus via the remote_write protocol, since Prometheus is designed to scrape metrics from the targets specified in the Prometheus config. This is known as the pull model, while you are trying to push metrics to Prometheus, aka the push model.
If you need to push application metrics to Prometheus, the following options exist:
Pushing metrics to pushgateway. Please read when to use the pushgateway before using it.
Pushing metrics to statsd_exporter.
Pushing application metrics to VictoriaMetrics (this is an alternative Prometheus-like monitoring system) via any supported text-based data ingestion protocol:
Prometheus text exposition format
Graphite
Influx line protocol
OpenTSDB
DataDog
JSON
CSV
All these protocols are much easier to implement and debug compared to the Prometheus remote_write protocol, since they are text-based, while remote_write is a binary protocol (basically, snappy-compressed protobuf messages sent over HTTP).
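To illustrate how simple the text-based route is, here is a minimal sketch that POSTs metrics in Prometheus text exposition format to VictoriaMetrics' /api/v1/import/prometheus endpoint, assuming an instance on the default port 8428:

```go
package main

import (
	"log"
	"net/http"
	"strings"
)

func main() {
	// Metrics in Prometheus text exposition format: name{labels} value
	payload := strings.NewReader(`go_gc_duration_seconds_sum 0.104596342
go_gc_duration_seconds_count 1629
`)

	// VictoriaMetrics ingests this format via a plain HTTP POST;
	// no protobuf marshalling or snappy encoding is required.
	resp, err := http.Post(
		"http://localhost:8428/api/v1/import/prometheus", // assumed local instance
		"text/plain",
		payload,
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```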
I need to write a client and a server. The client can send different request types as Go structs, and the server should recognize the type and invoke a corresponding handler function. How would you achieve that? I tried looking into the gob package, but I don't see how it can recognize the type it receives from the stream.
One more question: is the gob package the most efficient way to pass messages between client and server from a low-latency, low-memory-utilization perspective?
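For the type-recognition part: gob can carry concrete types behind an interface as long as both sides register them with gob.Register, and the receiver can then dispatch with a type switch. A minimal sketch, using hypothetical LoginRequest and PingRequest types and a bytes.Buffer standing in for the network connection:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

// Hypothetical request types; gob needs concrete types registered
// so it can reconstruct them behind an interface.
type LoginRequest struct{ User string }
type PingRequest struct{ Seq int }

// Envelope carries any request as an interface field.
type Envelope struct{ Payload interface{} }

func main() {
	gob.Register(LoginRequest{})
	gob.Register(PingRequest{})

	var conn bytes.Buffer // stands in for a net.Conn

	// Client side: encode a request.
	if err := gob.NewEncoder(&conn).Encode(Envelope{Payload: PingRequest{Seq: 1}}); err != nil {
		log.Fatal(err)
	}

	// Server side: decode and dispatch on the concrete type.
	var env Envelope
	if err := gob.NewDecoder(&conn).Decode(&env); err != nil {
		log.Fatal(err)
	}
	switch req := env.Payload.(type) {
	case LoginRequest:
		fmt.Println("login from", req.User)
	case PingRequest:
		fmt.Println("ping", req.Seq)
	default:
		fmt.Println("unknown request")
	}
}
```

As for efficiency: gob transmits a type description once per stream and then only values, so its per-message overhead is low on long-lived connections, but it is Go-only; if you need cross-language clients or want to squeeze latency further, a schema-based binary format such as protobuf is worth benchmarking against it.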
I want to write an RPC service which uses JSON as the RPC payload but does not follow the JSON-RPC standard. I also want to execute a pallet's extrinsic function based on the JSON data received over RPC. How can I achieve these goals?
I tried reading Moonbeam's code, but it seems too complicated for achieving these goals.
I am having trouble figuring out how the Datadog forwarder encodes/encrypts its messages. We are using the forwarder as described in this documentation: https://docs.datadoghq.com/serverless/forwarder/ . On that page, Datadog offers an option to send the same event to another Lambda that it invokes via the AdditionalTargetLambdaARNs flag. We are doing this, and the other Lambda is invoked, but the event input we receive is a long string that looks base64-encoded; when I put it into a base64 decoder, I get gibberish back. Does anyone know how Datadog compresses/encodes/encrypts the data/logs it forwards, so that I can read the logs in my Lambda and perform actions on the forwarded data? I have been searching Google and the Datadog site for documentation on this but can't find any.
It looks like Datadog uses zstd compression on its data before sending it: https://github.com/DataDog/datadog-agent/blob/972c4caf3e6bc7fa877c4a761122aef88e748b48/pkg/util/compression/zlib.go
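If so, a sketch along these lines might recover the payload. Note that the linked file actually implements zlib, so this sketch (purely an assumption about the format) tries zlib first and falls back to zstd, using the third-party github.com/klauspost/compress/zstd package for the latter:

```go
package main

import (
	"bytes"
	"compress/zlib"
	"encoding/base64"
	"fmt"
	"io"
	"log"
	"os"
	"strings"

	"github.com/klauspost/compress/zstd"
)

// decodePayload base64-decodes the event body, then tries zlib
// (what the linked agent file implements) and falls back to zstd.
func decodePayload(encoded string) ([]byte, error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return nil, fmt.Errorf("base64 decode: %w", err)
	}

	// Attempt zlib first (stdlib; zlib.NewReader validates the header).
	if zr, err := zlib.NewReader(bytes.NewReader(raw)); err == nil {
		defer zr.Close()
		return io.ReadAll(zr)
	}

	// Fall back to zstd (third-party decoder).
	dec, err := zstd.NewReader(nil)
	if err != nil {
		return nil, err
	}
	defer dec.Close()
	return dec.DecodeAll(raw, nil)
}

func main() {
	// Pipe the base64 event body in on stdin:
	//   echo "$EVENT_BODY" | go run main.go
	in, err := io.ReadAll(os.Stdin)
	if err != nil {
		log.Fatal(err)
	}
	out, err := decodePayload(strings.TrimSpace(string(in)))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```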
I would like to just HTTP POST events into a spout. Do I need to set up a web server myself, or would that be redundant? All of the tutorials that I have seen so far assume that an application will be fetching (or even just generating) the data itself and passing it to emit-spout!.
Storm uses a pull-based model in Spout.nextTuple(). Thus, it might be best to have a buffer in between: a web server takes HTTP POST requests and writes into that buffer, and a spout can pull the data from the buffer.
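The pattern itself is language-agnostic; here is a minimal sketch of the web-server-plus-buffer half in Go, with a buffered channel as the buffer and a consumer loop playing the role of a spout's nextTuple():

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Buffered channel acts as the buffer between the web server
	// and the pull-based consumer.
	buffer := make(chan string, 1024)

	// Web server side: accept HTTP POSTs and write events into the buffer.
	http.HandleFunc("/events", func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		select {
		case buffer <- string(body):
			w.WriteHeader(http.StatusAccepted)
		default:
			http.Error(w, "buffer full", http.StatusServiceUnavailable)
		}
	})

	// Consumer side: pull events from the buffer, as nextTuple() would.
	go func() {
		for event := range buffer {
			fmt.Println("emit:", event)
		}
	}()

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```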