I am having trouble figuring out how the Datadog forwarder encodes/encrypts the messages it sends. We are using the forwarder as described in this documentation: https://docs.datadoghq.com/serverless/forwarder/ . On that page, Datadog offers an option to send the same event to another lambda that it invokes via the AdditionalTargetLambdaARNs flag. We are doing this, and the other lambda is invoked, but the event input we receive is a long string that looks base64-encoded; when I run it through a base64 decoder, I get gibberish back. Does anyone know how Datadog compresses/encodes/encrypts the data it forwards, so that I can read the logs in my lambda and perform actions on the forwarded data? I have searched Google and the Datadog site for documentation on this, but I can't find any.
It looks like Datadog compresses its data before sending it; the compression package in the agent source uses zlib: https://github.com/DataDog/datadog-agent/blob/972c4caf3e6bc7fa877c4a761122aef88e748b48/pkg/util/compression/zlib.go
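Assuming the payload is base64-encoded, zlib-compressed JSON (an assumption based on that source file, not something Datadog documents for the forwarder), a minimal sketch in Go to unwrap it would look like this:

```go
// A minimal sketch of decoding the forwarded payload, assuming it is
// base64-encoded and zlib-compressed. This is an assumption, not a
// documented format -- if zlib fails, try compress/gzip instead.
package main

import (
	"bytes"
	"compress/zlib"
	"encoding/base64"
	"fmt"
	"io"
	"log"
)

func decodePayload(encoded string) ([]byte, error) {
	// Undo the base64 layer first.
	compressed, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return nil, err
	}
	// Then inflate the zlib layer.
	r, err := zlib.NewReader(bytes.NewReader(compressed))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return io.ReadAll(r)
}

func main() {
	payload, err := decodePayload("eJw...") // hypothetical event string
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(payload))
}
```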
I am trying to find a working example of how to use the remote write receiver in Prometheus.
Link: https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver
I am able to send a request to the endpoint (POST /api/v1/write) and can authenticate with the server. However, I have no idea in what format I need to send the data.
The official documentation says that the data needs to be in Protobuf format and snappy-compressed. I know the libraries for them. I have a few metrics I need to send over to Prometheus at http://localhost:1234/api/v1/write.
The metrics I am trying to export are scraped from a metrics endpoint (http://127.0.0.1:9187/metrics) and look like this:
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 1.11e-05
go_gc_duration_seconds{quantile="0.25"} 2.4039e-05
go_gc_duration_seconds{quantile="0.5"} 3.4507e-05
go_gc_duration_seconds{quantile="0.75"} 5.7043e-05
go_gc_duration_seconds{quantile="1"} 0.002476999
go_gc_duration_seconds_sum 0.104596342
go_gc_duration_seconds_count 1629
As of now, I can authenticate with my server via a POST request in Golang.
Please note that it isn't recommended to send application data to Prometheus via the remote_write protocol, since Prometheus is designed to scrape metrics from the targets specified in its config. This is known as the pull model, while you are trying to push metrics to Prometheus, aka the push model.
If you need pushing application metrics to Prometheus, then the following options exist:
Pushing metrics to pushgateway. Please read when to use the pushgateway before using it.
Pushing metrics to statsd_exporter.
Pushing application metrics to VictoriaMetrics (this is an alternative Prometheus-like monitoring system) via any supported text-based data ingestion protocol:
Prometheus text exposition format
Graphite
Influx line protocol
OpenTSDB
DataDog
JSON
CSV
All these protocols are much easier to implement and debug compared to the Prometheus remote_write protocol, since they are text-based, while remote_write is a binary protocol (basically, snappy-compressed protobuf messages sent over HTTP).
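That said, if you do want to speak remote_write directly, here is a minimal sketch in Go, assuming the endpoint from the question; the metric name and label values are illustrative, and the prompb/snappy module versions may differ in your setup:

```go
// A minimal sketch of a Prometheus remote_write client in Go.
package main

import (
	"bytes"
	"log"
	"net/http"
	"time"

	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

func main() {
	// Build a WriteRequest with one time series and one sample.
	req := &prompb.WriteRequest{
		Timeseries: []prompb.TimeSeries{{
			Labels: []prompb.Label{
				{Name: "__name__", Value: "go_gc_duration_seconds_count"},
				{Name: "job", Value: "my_exporter"}, // illustrative label
			},
			Samples: []prompb.Sample{
				{Value: 1629, Timestamp: time.Now().UnixMilli()},
			},
		}},
	}

	// Serialize to protobuf, then compress with snappy (block format).
	raw, err := req.Marshal()
	if err != nil {
		log.Fatal(err)
	}
	compressed := snappy.Encode(nil, raw)

	// POST to the remote-write receiver with the required headers.
	httpReq, err := http.NewRequest(http.MethodPost,
		"http://localhost:1234/api/v1/write", bytes.NewReader(compressed))
	if err != nil {
		log.Fatal(err)
	}
	httpReq.Header.Set("Content-Type", "application/x-protobuf")
	httpReq.Header.Set("Content-Encoding", "snappy")
	httpReq.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")

	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```

Add whatever authentication your server requires (e.g., basic auth on the request) before sending.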
I am using serverless-plugin-datadog, which uses datadog-lambda-layer under the hood.
The docs state that by using this plugin it is no longer necessary to wrap a handler. This is, by the way, the main reason I decided to go for it.
The lambda itself is a REST API, which responds with dedicated status codes.
My question now is: how can I monitor the number of 4xx and 5xx HTTP status codes? Do I have to define custom metrics in Datadog for this to work? I was under the assumption that the plugin provides this data out of the box, but it looks like I'm missing an important part here.
"...but it looks like I'm missing an important part here."
That was the point. The lambda itself doesn't have much to do with particular status codes. So I can either log each status code and let Datadog parse it accordingly (a sketch of that approach is below),
or, and that's the solution I went for, I can leverage API Gateway for monitoring status codes per lambda.
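If you go the logging route, here is a hedged sketch (not from the plugin docs) of emitting one structured log line per response, so a Datadog log-based metric can count 4xx/5xx. The handler shape and field names are illustrative assumptions:

```go
// A sketch of logging the response status as JSON so Datadog can
// parse it; create a facet/log-based metric on "status_code" to
// chart 4xx/5xx counts. Handler shape is an assumption.
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	resp := events.APIGatewayProxyResponse{StatusCode: 404, Body: "not found"}

	// One JSON log line per request.
	entry, _ := json.Marshal(map[string]interface{}{
		"path":        req.Path,
		"status_code": resp.StatusCode,
	})
	log.Println(string(entry))

	return resp, nil
}

func main() {
	lambda.Start(handler)
}
```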
I have the following two scenarios, and for each one I need a recommendation as to which NiFi processor to use:
I have RESTful web services running outside NiFi. NiFi needs to get/post/delete/update some data by calling a specific REST API. Once the REST API receives the request from NiFi, it sends a response back to NiFi. Which NiFi processor should I use here?
In the second scenario, I have an application running outside NiFi. This application has its own GUI. The user needs some information, so they want to send a request to NiFi. Is there a NiFi processor that accepts a request from the application, processes it, and sends a response back?
I have actually read all the questions about GetHTTP and InvokeHTTP.
I initially tried the InvokeHTTP processor, with both GET and POST calls, but I don't see any response from the REST API running outside NiFi.
I did not try GetHTTP.
I am using NiFi itself, so there is no code to share.
I expect NiFi to be able to call a REST API running outside it, and to accept and process requests coming from an outside application.
Yep, NiFi comes bundled with processors that satisfy both of your requirements.
For scenario #1, you can use a combination of GetHTTP/PostHTTP, which, as their names imply, are HTTP clients that make GET and POST calls respectively. However, the community later came up with InvokeHTTP, which offers more features, like support for NiFi Expression Language, support for incoming flowfiles, etc.
For scenario #2, you can use either ListenHTTP or the combination of HandleHttpRequest/HandleHttpResponse. The latter offers a more robust web-service implementation, while the former is a simple webhook kind of listener. I haven't worked much with ListenHTTP, so I can't comment more on that.
Having said that, for your second scenario, if your objective is to consume NiFi statistics, you can directly hit NiFi's REST API rather than having a separate NiFi flow with web-service capability.
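If it helps, here is a minimal sketch in Go of querying NiFi's REST API for controller-level statistics, assuming an unsecured NiFi on localhost:8080 (adjust host and auth to your setup):

```go
// A minimal sketch of hitting NiFi's REST API directly for statistics.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// /nifi-api/flow/status returns controller-level stats (queued
	// flowfiles, running/stopped component counts, etc.) as JSON.
	resp, err := http.Get("http://localhost:8080/nifi-api/flow/status")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```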
Useful Links
https://pierrevillard.com/2016/03/13/get-data-from-dropbox-using-apache-nifi/
https://dzone.com/articles/using-websockets-with-apache-nifi
https://ddewaele.github.io/http-communication-with-apache-nifi/
We are creating custom metrics in JMeter using Beanshell scripting and saving them to a file. Our requirement is to send these metrics to InfluxDB. We tried using the Backend Listener with the Graphite and InfluxDB implementation clients, but couldn't send the custom values; only the default JMeter metrics are being passed.
Has anyone done this before? Can you guide us in resolving this issue?
We are using JMeter 3.3 and influxdb-1.4.2-1.
Two words: line protocol.
Another two words: custom listener (Beanshell/JSR223 with Groovy).
Marry them, and you'll have what you want.
I did that work once, and it didn't take long.
There may be other options (e.g., take the result file and feed it to a script that reshapes it into the same line protocol after the run, rather than live), but the one I suggest is the simplest.
To do it you can use the /write endpoint, as described in the InfluxDB documentation. In JMeter this can be done with an "HTTP Request" sampler that POSTs line-protocol text to that endpoint; the custom values then show up as regular points in the database.
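For reference, here is a minimal sketch in Go of what that /write call looks like (InfluxDB 1.x assumed, matching influxdb-1.4.2-1 above; database name, measurement, and field names are placeholders):

```go
// A minimal sketch of pushing one custom metric to InfluxDB 1.x
// via the /write endpoint.
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
	"time"
)

func main() {
	// One point in line protocol: measurement,tags fields timestamp(ns).
	point := fmt.Sprintf("custom_metrics,test=my_test response_time=123 %d",
		time.Now().UnixNano())

	// POST to /write with the target database as a query parameter.
	resp, err := http.Post(
		"http://localhost:8086/write?db=jmeter",
		"text/plain",
		strings.NewReader(point),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	// InfluxDB replies 204 No Content on success.
	log.Println("status:", resp.Status)
}
```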
I want to programmatically dump logs from OpenWhisk into an external service. I can do this by capturing log output and then posting it at the end of every action, but this adds overhead to my function.
Is there a way to get this data from the OpenWhisk API similar to wsk activation logs ACTIVATION_ID?
Action logs are available through the platform API. Console output from actions (stdout or stderr) is stored in the activation records.
Activation records can be accessed by sending an HTTP request to the following endpoint:
/namespaces/{namespace}/activations/{activationid}/logs
Client libraries for accessing the API are available for multiple languages.
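For example, here is a minimal sketch in Go (API host, activation ID, and credentials are placeholders; the auth key from `wsk property get --auth` has the form "user:password" and maps to HTTP basic auth):

```go
// A minimal sketch of fetching an activation's logs from the
// OpenWhisk REST API.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// "_" is the default namespace; ACTIVATION_ID is a placeholder.
	url := "https://openwhisk.example.com/api/v1/namespaces/_/activations/ACTIVATION_ID/logs"

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		log.Fatal(err)
	}
	// Split the wsk auth key on ":" to get these two parts.
	req.SetBasicAuth("USER_PART", "PASSWORD_PART")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```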