I have a Spring Boot application which consumes data from a Kafka topic. I am using Micrometer and InfluxDB for monitoring purposes. I read in the documentation that by adding micrometer-registry-influx we automatically enable exporting data to InfluxDB. I have a few questions on this:
What kind of data does Micrometer automatically add to InfluxDB?
Can we add custom data to InfluxDB according to my application?
How can I publish custom or application-specific data to InfluxDB?
How can I disable adding the default data to InfluxDB?
As I understand from the documentation:
- The standard output set is described here
- Adding your own metrics (see the sketch after this list)
- Metrics filter (here you can exclude the standard metrics accordingly)
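To make the "own metrics" and "filter" points concrete, here is a minimal sketch assuming Micrometer's Counter and MeterFilter APIs (the metric names are illustrative):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.config.MeterFilter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsConfig {

    // Custom application metric: counts records consumed from the topic.
    @Bean
    Counter processedRecords(MeterRegistry registry) {
        return Counter.builder("app.kafka.records.processed")
                .description("Records consumed from the Kafka topic")
                .register(registry);
    }

    // Suppress a group of default metrics (here: JVM GC) from being exported.
    @Bean
    MeterFilter denyJvmGcMetrics() {
        return MeterFilter.denyNameStartsWith("jvm.gc");
    }
}

Calling processedRecords.increment() from your Kafka listener publishes the counter on the next InfluxDB export interval, and Spring Boot applies MeterFilter beans to the auto-configured registry, which is how the default data can be trimmed.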
I have a problem adding a custom metric in Kafka Streams.
I made a Kafka Streams application with Spring Boot like this (Kafka Streams with Spring Boot, Baeldung)
and deployed several instances of this app on k8s.
I want to know the average number of processed messages per second for each app instance, and it exists in the Kafka Streams built-in thread metrics (process-rate) (ref. Kafka Streams Metrics).
But that metric uses thread-id as its tag key, so each app instance reports under a different metric tag key.
I'd like to report that metric value under the same tag key in every app instance.
So I came up with a solution: use that built-in metric value to add a new custom metric.
But there is no specific information about how to read built-in metric values in source code and add a custom metric.
In the ref there is a way to add custom metrics, but no specific information about how to apply it in source code.
Is there a way to solve this problem? Or is there any other way?
I cannot find any reference on how to implement reading the metrics.
Can someone help with an example and references?
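To show what I mean, here is a rough sketch of the idea, using KafkaStreams#metrics() and a Micrometer MeterRegistry (the metric and tag names are made up, and I am not sure this is the right API):

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import org.apache.kafka.streams.KafkaStreams;

public class ProcessRateMirror {

    // Sum the built-in per-thread process-rate values and re-expose them
    // under one stable tag key that is identical across app instances.
    public static void register(KafkaStreams streams, MeterRegistry registry, String instanceId) {
        Gauge.builder("app.stream.process.rate", streams, ks ->
                ks.metrics().values().stream()
                        .filter(m -> "process-rate".equals(m.metricName().name()))
                        .mapToDouble(m -> ((Number) m.metricValue()).doubleValue())
                        .sum())
                .tag("instance", instanceId)
                .register(registry);
    }
}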
As the stats_example says here, you can get the stats listed in STATISTICS.md. But, as clearly mentioned in the example comments, you need to implement the metrics handling yourself:
Stats events are emitted as JSON (as string). Either directly forward
the JSON to your statistics collector, or convert it to a map to
extract fields of interest.
So in this case, in your application, you need to implement a metrics collector for something like Prometheus.
And if you want full broker-side metrics, you can implement Kafka monitoring as the Kafka documentation explains here:
Kafka uses Yammer Metrics for metrics reporting in the server. The
Java clients use Kafka Metrics, a built-in metrics registry that
minimizes transitive dependencies pulled into client applications.
Both expose metrics via JMX and can be configured to report stats
using pluggable stats reporters to hook up to your monitoring system.
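For a librdkafka-based client you would parse that stats JSON directly; for the Java clients mentioned above, a pluggable stats reporter is a class implementing org.apache.kafka.common.metrics.MetricsReporter. A minimal sketch (the forwarding target here is illustrative):

import java.util.List;
import java.util.Map;
import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;

public class ForwardingMetricsReporter implements MetricsReporter {

    @Override
    public void init(List<KafkaMetric> metrics) {
        metrics.forEach(this::metricChange);
    }

    @Override
    public void metricChange(KafkaMetric metric) {
        // Called whenever a metric is added or updated; forward the value
        // to your monitoring system (Prometheus, InfluxDB, ...) here.
        System.out.printf("%s = %s%n", metric.metricName(), metric.metricValue());
    }

    @Override
    public void metricRemoval(KafkaMetric metric) { }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}

The reporter is registered on the client through the metric.reporters configuration property (CommonClientConfigs.METRIC_REPORTER_CLASSES_CONFIG).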
I'm trying to get Micrometer metrics data into Splunk. Each metric endpoint gives the current value of a metric, so I would either need Splunk to send an HTTP request to my application periodically, or I can write the metric values to a log file periodically.
So how do I get my application to write the metric values to logs?
If you are on Spring Boot 2.x and Micrometer 1.1.0+, you can create a bean of the special periodic (1 minute by default) logging registry (see https://github.com/micrometer-metrics/micrometer/issues/605):
@Bean
LoggingMeterRegistry loggingMeterRegistry() {
    return new LoggingMeterRegistry();
}
This is by far the easiest way to log everything via the logging system.
Another alternative is to create a scheduled job that runs a method on a bean with an injected MeterRegistry, iterates over all the meters (possibly filtering out the ones you don't need), and writes a log line in your own format (a sketch follows at the end of this answer).
If you think about it, this is exactly what the metrics endpoint of Spring Boot Actuator does, except that it returns the data via HTTP instead of writing it to a log.
Here is an up-to-date implementation of the relevant endpoint from the Spring Boot Actuator source.
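A minimal sketch of that scheduled-job alternative, assuming Spring's @Scheduled and an injected MeterRegistry (the class name and log format are illustrative):

import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class MetricsLogJob {

    private final MeterRegistry registry;

    public MetricsLogJob(MeterRegistry registry) {
        this.registry = registry;
    }

    // Once a minute, write every meter's measurements in a log-friendly format.
    @Scheduled(fixedRate = 60_000)
    public void logMetrics() {
        registry.forEachMeter(meter ->
                meter.measure().forEach(measurement ->
                        System.out.printf("%s{%s} %s=%f%n",
                                meter.getId().getName(),
                                meter.getId().getTags(),
                                measurement.getStatistic(),
                                measurement.getValue())));
    }
}

Remember that @Scheduled only fires when @EnableScheduling is present on a configuration class, and in a real application you would write through your logger rather than System.out.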
I am running a Kafka Streams app with Spring Boot 2.
I would like to have my Kafka Streams metrics available in the Prometheus format at host:8080/actuator/prometheus.
I don't manage to get this working, and I am not sure I understand how Kafka Streams metrics are exported.
Can Actuator get these JMX metrics?
Is there a way to get these metrics and expose them in the Prometheus format?
PS: it didn't work with the Java jmx_prometheus_agent either.
Does someone have a solution or an example?
Thank you
You could produce all available Kafka Streams metrics (the same set as returned by KafkaStreams.metrics()) into Prometheus using the micrometer-core and spring-kafka libraries. To integrate Kafka Streams with Micrometer, declare a KafkaStreamsMicrometerListener bean:
@Bean
KafkaStreamsMicrometerListener kafkaStreamsMicrometerListener(MeterRegistry meterRegistry) {
    return new KafkaStreamsMicrometerListener(meterRegistry);
}
where MeterRegistry is from the micrometer-core dependency.
If you create Kafka Streams using StreamsBuilderFactoryBean from spring-kafka, then you need to add the listener to it:
streamsBuilderFactoryBean.addListener(kafkaStreamsMicrometerListener);
And if you create KafkaStreams objects directly, then on each KafkaStreams object you need to invoke
kafkaStreamsMicrometerListener.streamsAdded(beanId, kafkaStreams);
where beanId is any unique identifier per KafkaStreams object.
As a result, Kafka Streams provides multiple useful Prometheus metrics, like kafka_consumer_coordinator_rebalance_latency_avg, kafka_stream_thread_task_closed_rate, etc. Under the hood, KafkaStreamsMicrometerListener uses KafkaStreamsMetrics.
If you want Grafana graphs of these Prometheus metrics, add them as the Gauge metric type.
I don't have a complete example, but the metrics are readily accessible and documented in the Confluent documentation on Monitoring Kafka Streams.
Maybe dismiss Actuator and use a @RestController from Spring Web along with KafkaStreams#metrics() to publish exactly what you need.
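A minimal sketch of that approach, assuming a KafkaStreams bean is available for injection (e.g. obtained from StreamsBuilderFactoryBean#getKafkaStreams; the endpoint path and key format are illustrative):

import java.util.Map;
import java.util.stream.Collectors;
import org.apache.kafka.streams.KafkaStreams;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StreamsMetricsController {

    private final KafkaStreams kafkaStreams;

    public StreamsMetricsController(KafkaStreams kafkaStreams) {
        this.kafkaStreams = kafkaStreams;
    }

    // Expose the raw Kafka Streams metric values as a simple name -> value map.
    @GetMapping("/streams-metrics")
    public Map<String, Object> metrics() {
        return kafkaStreams.metrics().entrySet().stream()
                .collect(Collectors.toMap(
                        e -> e.getKey().group() + ":" + e.getKey().name(),
                        e -> e.getValue().metricValue(),
                        (a, b) -> a)); // keep the first value on duplicate group:name keys
    }
}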
According to the Spring release notes, spring-integration-aws 1.1.0.M1 does not include a DynamoDB MetadataStore implementation. There is still the ConcurrentMetadataStore interface, which is a key-value based store and, based on the implementation, I suppose it maps streams to the latest sequence number read. But it does not use any data store from which to retrieve checkpoints.
I am using Spring Integration for Kinesis consumption and need to implement checkpointing. I am wondering if I need to do it manually by connecting to DynamoDB and always updating checkpoints, or whether there is another way of doing it using the Spring framework.
P.S.: I can't use the Spring Cloud KinesisBinderConfiguration as I dynamically consume events from a list of configurable streams.
Thank you
If you are not talking about Spring Cloud Stream and the AWS Kinesis Binder implementation, then I don't see any blockers for you to upgrade your solution to Spring Integration AWS 2.0 and go ahead with the already provided DynamoDbMetaDataStore.
Or, if it is too hard for you to move to Spring Integration 5.0, you can simply copy the implementation into your own class and inject it into the KinesisMessageDrivenChannelAdapter: https://github.com/spring-projects/spring-integration-aws/blob/master/src/main/java/org/springframework/integration/aws/metadata/DynamoDbMetaDataStore.java
Although it is actually already available in 1.1.0.RELEASE, so I don't see a reason for you to stick with 1.1.0.M1: https://spring.io/blog/2017/11/27/spring-integration-for-aws-1-1-ga-available
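A minimal wiring sketch, assuming Spring Integration AWS 2.0 (the table and stream names are illustrative, and the exact constructor and setter signatures may differ between versions):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBAsync;
import com.amazonaws.services.kinesis.AmazonKinesis;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.aws.inbound.kinesis.KinesisMessageDrivenChannelAdapter;
import org.springframework.integration.aws.metadata.DynamoDbMetaDataStore;

@Configuration
public class CheckpointConfig {

    // Checkpoints are stored as key/value pairs in a DynamoDB table.
    @Bean
    DynamoDbMetaDataStore dynamoDbMetaDataStore(AmazonDynamoDBAsync dynamoDb) {
        return new DynamoDbMetaDataStore(dynamoDb, "my-checkpoint-table");
    }

    // The adapter persists its shard checkpoints through the metadata store.
    @Bean
    KinesisMessageDrivenChannelAdapter kinesisAdapter(AmazonKinesis kinesis,
            DynamoDbMetaDataStore checkpointStore) {
        KinesisMessageDrivenChannelAdapter adapter =
                new KinesisMessageDrivenChannelAdapter(kinesis, "my-stream");
        adapter.setCheckpointStore(checkpointStore);
        adapter.setOutputChannelName("kinesisChannel");
        return adapter;
    }
}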