Get kafka broker, consumer, producer metrics using confluent-kafka-go - go

I cannot find any reference implementation for collecting metrics.
Can someone help with an example and references?

As the stats_example shows, you can get the stats listed in STATISTICS.md. But, as clearly mentioned in the example comments, you need to implement the metrics handling yourself:
Stats events are emitted as JSON (as string). Either directly forward
the JSON to your statistics collector, or convert it to a map to
extract fields of interest.
So in this case you need to implement a metrics collector in your application, for example one that exports to Prometheus.
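As an illustration, here is a minimal sketch in Go of handling that stats payload. The `*kafka.Stats` event type and the `statistics.interval.ms` setting are real parts of confluent-kafka-go/librdkafka, but the helper name `extractStat` and the sample JSON are made up for this example:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// extractStat pulls a top-level numeric field out of a librdkafka
// statistics JSON payload. In confluent-kafka-go you enable these
// payloads by setting "statistics.interval.ms" in the ConfigMap;
// they then arrive as *kafka.Stats events from Poll()/Events(),
// and e.String() yields the JSON documented in STATISTICS.md.
func extractStat(statsJSON, field string) (float64, bool) {
	var stats map[string]interface{}
	if err := json.Unmarshal([]byte(statsJSON), &stats); err != nil {
		return 0, false
	}
	v, ok := stats[field].(float64)
	return v, ok
}

func main() {
	// Trimmed-down sample payload; a real one carries every field
	// documented in STATISTICS.md.
	sample := `{"name":"rdkafka#consumer-1","rxmsgs":42,"rx_bytes":1024}`
	if n, ok := extractStat(sample, "rxmsgs"); ok {
		fmt.Printf("rxmsgs=%v\n", n) // → rxmsgs=42
	}
}
```

From here you would forward the extracted values to your collector of choice, e.g. set them on Prometheus gauges.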
And if you want full broker-side metrics, you can implement Kafka monitoring as the Kafka documentation explains here:
Kafka uses Yammer Metrics for metrics reporting in the server. The
Java clients use Kafka Metrics, a built-in metrics registry that
minimizes transitive dependencies pulled into client applications.
Both expose metrics via JMX and can be configured to report stats
using pluggable stats reporters to hook up to your monitoring system.
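Since the brokers expose these metrics over JMX, one common way to get them into Prometheus is the JMX exporter Java agent. A minimal rule-file sketch (the MBean pattern and metric name below are illustrative, assuming the standard BrokerTopicMetrics MBean; adjust to the beans you need):

```yaml
# jmx_exporter rule file (illustrative)
rules:
  # kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec -> OneMinuteRate
  - pattern: kafka.server<type=BrokerTopicMetrics, name=MessagesInPerSec><>OneMinuteRate
    name: kafka_server_messages_in_per_sec
    type: GAUGE
```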

Related

How can I get built-in metrics and add custom metrics in Spring Boot Kafka Streams?

I have a problem adding custom metrics in Kafka Streams.
I made a Kafka Streams application with Spring Boot like this (Kafka Streams with Spring Boot, Baeldung)
and deployed several instances of this app on k8s.
I want to know the average number of processed messages per second for each app instance, and it exists in the Kafka Streams built-in thread metrics (process-rate). (ref. Kafka Streams Metrics)
But that metric uses thread-id as a tag key, so each app instance has a different metric tag key.
I'd like to use that metric value with the same tag key in each app instance.
So I came up with a solution: using that built-in metric value to add a new custom metric.
But there's no specific information about how to get built-in metric values in source code and add a custom metric.
In the ref, there's a way to add custom metrics, but no specific information about how I can apply it in source code.
Is there a way to solve this problem? Or is there any other way?

How to monitor Apache Kafka metrics?

How can I build a microservice to monitor Kafka metrics?
I don't want to use Confluent Control Center or any other tool.
Before building anything like a microservice, I would explore the kafka exporter for Prometheus to expose Kafka metrics in Prometheus format. You could then use the Prometheus server to scrape these metrics and Grafana for dashboarding/visualisations. There are other tools you could use for scraping instead of Prometheus/Grafana, e.g. Elastic Metricbeat (which I mention because you've tagged the question with 'elasticsearch'), but the Prometheus/Grafana combination is quite easy to get up and running. There are also out-of-the-box Grafana dashboards that you can install without having to set this up manually, e.g. this one.
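Wiring kafka_exporter into Prometheus is then just a scrape job. A minimal sketch, assuming kafka_exporter is running on its default port 9308:

```yaml
# prometheus.yml (fragment) - scrape kafka_exporter
scrape_configs:
  - job_name: "kafka"
    static_configs:
      - targets: ["localhost:9308"]  # kafka_exporter's default listen port
```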

Prometheus log metric exporter

I used Prometheus in a Java app to monitor the different number of logs in my system.
Once I added <Prometheus name="METRICS"/> to my log4j.xml appenders configuration my Prometheus metrics were populated with the number of info/error/debug messages that were logged in my system.
This was very helpful. I am trying to achieve the same functionality in a Go microservice which uses the standard library log package.
Is there any native Prometheus support for this kind of functionality or do I need to implement it myself?
The standard library Logger doesn't offer any hooks, so there's no way to create such a thing directly. What you'd want to do is put a wrapper on top of it, or use a different logging system.
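A sketch of the wrapper approach the answer suggests (the type and method names here are made up, not from any library): it counts log calls per level alongside the standard log package. In a real service you would expose the counts as Prometheus counters via client_golang rather than plain integers:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"sync/atomic"
)

// CountingLogger wraps the standard library logger and keeps
// per-level counters. Swap the atomic counters for a Prometheus
// CounterVec to export them for scraping.
type CountingLogger struct {
	logger *log.Logger
	infos  atomic.Int64
	errors atomic.Int64
}

func NewCountingLogger() *CountingLogger {
	return &CountingLogger{logger: log.New(os.Stderr, "", log.LstdFlags)}
}

func (c *CountingLogger) Infof(format string, v ...interface{}) {
	c.infos.Add(1)
	c.logger.Printf("INFO "+format, v...)
}

func (c *CountingLogger) Errorf(format string, v ...interface{}) {
	c.errors.Add(1)
	c.logger.Printf("ERROR "+format, v...)
}

// Counts returns the number of info and error messages logged so far.
func (c *CountingLogger) Counts() (infos, errors int64) {
	return c.infos.Load(), c.errors.Load()
}

func main() {
	l := NewCountingLogger()
	l.Infof("service started")
	l.Errorf("something failed: %v", "timeout")
	i, e := l.Counts()
	fmt.Printf("info=%d error=%d\n", i, e) // → info=1 error=1
}
```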

Load testing with Kafka and Storm

Our system takes a POST request and sends the JSON body to a Kafka topic; this topic is configured as a spout for a topology, and the topology writes output messages to another Kafka topic.
How can I load test this system and measure the number of messages processed per second?
I am planning to use JMeter for load testing.
JMeter is the one that comes to mind whenever we think about performance or load testing of web APIs; however, you can also check alternatives like LoadRunner.
From a Kafka and Storm point of view, you need to write a Storm Kafka spout to read and commit offsets, and you might also need a monitoring tool to check and validate how the system behaves under load, for example:
Throughput (messages/sec) by size of data
Throughput (messages/sec) by number of messages
Performance on the producer side
Performance on the consumer side
I've tried Yahoo Kafka Manager, which is open source and serves the purpose; you might want to give it a try: https://github.com/yahoo/kafka-manager. There are other monitoring tools as well; see Kafka Monitoring tool in Production.
If you want to focus only on Kafka load/performance benchmarking, the commands below, shipped with your Kafka distribution, are very helpful:
kafka-producer-perf-test.sh
kafka-consumer-perf-test.sh
For more details: https://gist.github.com/ueokande/b96eadd798fff852551b80962862bfb3
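For illustration, a typical invocation of these tools looks like the following (the topic name and record counts are arbitrary, and flag names vary slightly between Kafka versions):

```shell
# Produce 100,000 records of 100 bytes each, unthrottled (-1),
# and report throughput/latency at the end.
kafka-producer-perf-test.sh \
  --topic perf-test \
  --num-records 100000 \
  --record-size 100 \
  --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092

# Consume the same records and report consumer throughput.
kafka-consumer-perf-test.sh \
  --topic perf-test \
  --messages 100000 \
  --bootstrap-server localhost:9092
```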
If you plan to use JMeter, it makes sense to consider the Pepper-Box - Kafka Load Generator plugin; it comes with PepperBoxKafkaSampler, which provides a handy UI allowing you to specify your Kafka endpoints, topics, etc.
If you also need to read messages from the Kafka topics, you can use JSR223 Test Elements and the Groovy language; check out the Apache Kafka - How to Load Test with JMeter and Writing a Kafka Consumer in Java articles for more information and example code snippets.

Possible to export Spring metrics from Micrometer to Kafka?

I am playing around with Spring Boot v2 at the moment. So far, my setup looks like this:
Spring -> Telegraf -> Kafka -> Telegraf -> influx
I am wondering whether or not it's possible to take out the first Telegraf between Spring and Kafka, to end up with something like this:
Spring -> Kafka -> Telegraf -> Influx
I've looked at the configuration options of Micrometer and there is no config for Kafka. Also, Telegraf was pulling data from Spring, and since Kafka is a push model (i.e. you push data into Kafka), would Spring be able to push data to Kafka? If yes, how? Through the use of HTTP POST methods?
I'm new to the whole concept.
would Spring be able to push data to Kafka? If yes, how? Through the use of HTTP POST methods?
Kafka uses its own TCP protocol, not HTTP, so no; at least not without using the Kafka REST Proxy.
You would basically be embedding the same thing that Telegraf does into your Spring code.
It's possible, sure, but built into Micrometer? Not that I'm aware of.
Plus, it would add overhead to your app to have an internal producer thread, you'd be required to include the Kafka clients with each of your monitored apps, and you'd need some safeguard preventing your app from failing if a Kafka connection isn't possible...
I would suggest keeping Telegraf installed on each host machine, or at the very least the Prometheus JMX exporter or Jolokia for your individual Java apps. From these, JMX metrics can be collected and pushed to downstream monitoring systems.
Or, as commented, you could skip Kafka, but I'm guessing you want to keep it there as a buffer.
On the other side, you can use the Kafka Connect InfluxDB sink to get optimal performance through consumer scaling.