Apache Kafka JMX Metrics with Spring Boot

All,
I have a requirement to expose Apache Kafka metrics through the Spring Boot (v2.3.0.RELEASE) actuator endpoint.
Please note, I am NOT using the spring-kafka library.
I am using the following libraries:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<version>2.5.0</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.5.0</version>
</dependency>
I have tried spring.jmx.enabled=true, but it doesn't seem to work. I assume this is because Spring is not managing Kafka.
Is there a way I can bind these JMX metrics to the micrometer MeterRegistry?
I am able to make this work using the JMX exporter provided by Prometheus, but since that requires an agent running on a different port, I was hoping to make this work with the default Micrometer and Spring Boot setup.

Have you tried KafkaClientMetrics, KafkaStreamsMetrics or KafkaConsumerMetrics?
KafkaConsumerMetrics collects metrics through JMX but it is deprecated in favor of KafkaClientMetrics.
You only need to create them and bind them to your registry:
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.kafka.KafkaClientMetrics;

KafkaConsumer<String, String> consumer;
KafkaProducer<String, String> producer;
MeterRegistry registry;
...
new KafkaClientMetrics(consumer).bindTo(registry);
new KafkaClientMetrics(producer).bindTo(registry);
Similarly for KafkaStreams:
new KafkaStreamsMetrics(kafkaStreams).bindTo(registry);
If Spring does not manage your Kafka components, why do you expect spring.jmx.enabled=true to do anything?
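For completeness, in a Spring Boot application the binding can live in a configuration class; a minimal sketch, assuming you construct the KafkaStreams instance yourself (the class and bean names here are only illustrative):

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.kafka.KafkaStreamsMetrics;
import org.apache.kafka.streams.KafkaStreams;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaStreamsMetricsConfig {

    // Binds the streams client's metrics to the auto-configured MeterRegistry
    // so they show up under the actuator metrics endpoint.
    @Bean
    public KafkaStreamsMetrics kafkaStreamsMetrics(KafkaStreams kafkaStreams, MeterRegistry registry) {
        KafkaStreamsMetrics metrics = new KafkaStreamsMetrics(kafkaStreams);
        metrics.bindTo(registry);
        return metrics;
    }
}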

Related

Intercept route in Apache camel parallel

I need to save the details of my routes in a DB, including the details between each pattern in a Camel route. I planned to use intercept (defined in my CamelContext, so all routes in the context are intercepted before each pattern is processed) to save the details in the DB.
But as I understand it, this will impact my application's performance, since routes will be intercepted before each pattern is processed. Is there any way to make intercept in Camel process in parallel?
As another approach I thought about using the WireTap pattern in Apache Camel, but I can't define it at the context level, so every route would have to write the WireTap explicitly. This is a bit hectic, since I am trying to reduce the complexity for the developers who write the routes: they should only need to write routes, and everything else should be handled by what I define in the CamelContext. Is there any way to define WireTap at the context level and not at the route level?
Or could someone suggest any other way to make this possible?
Thanks in advance
Intercept with seda or try global OnCompletion
You could use intercept to call a separate seda endpoint, which is basically a fire-and-forget asynchronous version of direct.
You could also try using a global onCompletion to get the message history from completed exchanges.
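A minimal sketch of the intercept-to-seda idea; the endpoint and bean names here are only illustrative:

import org.apache.camel.builder.RouteBuilder;

// Sketch: the interceptor applies to the routes defined in this builder.
public class BusinessRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Hand every intercepted exchange to a seda queue without waiting for the
        // audit processing to finish, so the business route continues immediately.
        intercept().to("seda:auditQueue?waitForTaskToComplete=Never");

        from("direct:placeOrder")
            .to("bean:orderService");
    }
}

// In a separate RouteBuilder (so it is not intercepted itself), a hypothetical
// consumer persists the audit details:
//
//     from("seda:auditQueue")
//         .to("bean:auditService?method=saveRouteDetails");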
But to be honest all this will affect performance and increase complexity for developers.
JMX
A more conventional way to monitor routes would be to use JMX, or JMX through Jolokia. This allows you to monitor your Camel application(s) with an external application that can even access them remotely. For example, your Spring applications might run in containers, so you could have another container that monitors all of these applications and writes the data to your database.
Through JMX you could call dumpRouteAsXml() on a route to get the route definition you mentioned, and then poll dumpRouteStatsAsXml(true, true) periodically to get a bunch of information about the route and each of its endpoints, like the endpoint id and exchangesCompleted, that you can use in your database.
This doesn't require any changes to the routes, but it needs some JVM parameters to enable remote JMX, possibly some authentication and SSL configuration for security depending on the environment, and some Camel dependencies. Much of this can be handled with project templates and Docker images, however.
Enabling JMX with JVM parameters
Disabling/enabling camel-jmx
Managing Camel Routes With JMX APIs
JConsole - GUI application for JMX
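As an illustration of reading those JMX operations from an external process, a minimal sketch using plain JMX; the service URL, context name and route id are placeholders:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RouteStatsReader {
    public static void main(String[] args) throws Exception {
        // Remote JMX endpoint of the Camel application (placeholder host/port).
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Camel registers each route under org.apache.camel:type=routes.
            ObjectName route = new ObjectName(
                "org.apache.camel:context=camel-1,type=routes,name=\"myRoute\"");
            // Poll full route statistics including per-processor details.
            String statsXml = (String) mbsc.invoke(route, "dumpRouteStatsAsXml",
                new Object[] { true, true },
                new String[] { "boolean", "boolean" });
            System.out.println(statsXml);
        }
    }
}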
Maven dependencies
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-management</artifactId>
<version>${camel.version}</version>
</dependency>
<!-- For Camel 3.x Spring boot -->
<dependency>
<groupId>org.apache.camel.springboot</groupId>
<artifactId>camel-jmx-starter</artifactId>
<version>${camel.version}</version>
</dependency>
<!-- For Camel 2.x Spring boot -->
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jmx-starter</artifactId>
<version>${camel.version}</version>
</dependency>

Disabling Spring Cloud Bus still ends up starting RabbitMQ

I am using Spring boot 2.2.9.RELEASE and Spring Cloud Hoxton.SR7. I am using Spring Cloud Bus to signal all my containers in a docker swarm stack and when deployed in production with a running RabbitMQ cluster things work perfectly!
I am using the RabbitMQ implementation via the spring-cloud-starter-bus-amqp Spring Boot starter. We occasionally run tests without needing the bus. There is a spring boot flag for this:
spring.cloud.bus.enabled=false
This disables the bus, but RabbitMQ still starts and spits out connection refused errors. I also had to add:
rabbitmq.autoStarting=false
I tried fussing around with disabling RabbitMQ's auto-configuration, but it seems the RabbitAutoConfiguration class name implies it is a Spring Boot auto-configuration class, when in actual fact it is a normal Spring Boot configuration class.
Is there a cleaner way to disable the Cloud Bus that also prevents RabbitMQ from starting?
You just need to include the spring-cloud-stream-test-support jar in your test scope. This jar includes binders that will override and replace the default binders. These test binders will not actually connect to the resources in the background.
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-test-support</artifactId>
<version>${spring.cloud.stream.version}</version>
<scope>test</scope>
</dependency>

How to set up a cache fallback in SpringBoot when unable to connect to Redis?

There is a Spring Boot (v2.2.7) application where a Redis cache is configured.
Fragment of pom.xml:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
</dependency>
When the Redis service is up and available, caching works as expected: the results of methods annotated with @Cacheable are cached. Unfortunately, when the Redis service is not available, any call to a cacheable method leads to an exception RedisConnectionFailureException: Unable to connect to Redis.
It would be reasonable for the application to keep working (executing its business logic) independently of the cache's availability.
Possible solutions are:
a custom implementation (i.e. a wrapper handling errors around the Redis cache); see the sketch below
standard configuration in Spring that considers the Redis cache service health (if there is such a thing)
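A minimal sketch of the first option, using a CacheErrorHandler that swallows cache read/write failures so the business logic keeps running when Redis is down (the class name is illustrative):

import org.springframework.cache.Cache;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.interceptor.CacheErrorHandler;
import org.springframework.cache.interceptor.SimpleCacheErrorHandler;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CacheFallbackConfig extends CachingConfigurerSupport {

    @Override
    public CacheErrorHandler errorHandler() {
        return new SimpleCacheErrorHandler() {
            @Override
            public void handleCacheGetError(RuntimeException e, Cache cache, Object key) {
                // Treat a failed lookup as a cache miss instead of failing the business call.
            }
            @Override
            public void handleCachePutError(RuntimeException e, Cache cache, Object key, Object value) {
                // Ignore failed writes; the method result is still returned to the caller.
            }
        };
    }
}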
What is the proper way to set up a fallback cache in SpringBoot?

How to integrate Spring Boot 1.x actuator metrics with io.micrometer? Getting io.micrometer.influx.InfluxRegistry - failed to send metrics error

I have a Rest service implemented with Spring Boot 1.x. I'm trying to send metrics data to an existing influx db by leveraging actuator /metrics. I've found out that Micrometer project (http://micrometer.io/docs/influx#_install) supports Spring Boot integration, but I am unable to find any documentation on how to configure the project to talk to the influx db.
E.g: influxdb.connurl, username, dbname etc.
My /metrics works fine. When I make a rest request to my endpoint, because influx db conn is not configured, I'm getting this error:
** [spectator-spring-metrics-publisher-0] WARN io.micrometer.influx.InfluxRegistry - failed to send metrics **
dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
<version>${spring-boot.version}</version>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-spring-legacy</artifactId>
<version>${micrometer.version}</version>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-influx</artifactId>
<version>${micrometer.version}</version>
</dependency>
Is there a documentation somewhere how to write the metrics to influx db? I can write a parser for localhost access file, and install a Telegraf agent to send system metrics, but want to pursue this route first.
I played around with Micrometer and InfluxDB the other day. First of all, you have to set some Influx parameters in your application.properties/application.yml file.
spring:
  metrics:
    influx:
      uri: http://localhost:8086/write
      enabled: true
      userName: root
      password: root
      step: PT10S
      db: metrics
Make sure the database already exists in your InfluxDB; I didn't find a way to create the database automatically if it does not exist.
You can also create a bean to configure your MeterRegistry.
You can use the registry to add some common tags and capture additional metrics.
@Bean
MeterRegistryConfigurer configurer() {
    return registry -> {
        // common tag attached to every metric this service publishes
        registry.config().commonTags("service", "tweets");
        // standard JVM and system binders
        new ClassLoaderMetrics().bindTo(registry);
        new JvmMemoryMetrics().bindTo(registry);
        new JvmGcMetrics().bindTo(registry);
        new ProcessorMetrics().bindTo(registry);
        new JvmThreadMetrics().bindTo(registry);
    };
}
I don't know if it's the best solution, but it works for me. In my Maven file I only use the "micrometer-registry-influx" dependency.
After that you should see metrics about your REST endpoints in your InfluxDB.
I hope this helps you a little bit.

Configure Kafka consumer acknowledgement mode for Spring Boot Kafka project

I'm using Spring Boot version 1.5.2.RELEASE along with Spring Kafka version 1.1.2.RELEASE.
Via the application.properties file I do see available options (spring.kafka.consumer.*) to configure Kafka Consumer.
What I'm not able to find though is a way to configure the acknowledgement mode.
spring.kafka.listener.ack-mode=
You can use the Spring Cloud Stream Kafka Binder to stream messages. In that case, add:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
and configure the consumer like this:
spring.cloud.stream.kafka.bindings.<channelName>.consumer..
and the producer like this:
spring.cloud.stream.kafka.bindings.<channelName>.producer..
For more detail, see the Spring Cloud Stream Kafka binder documentation.
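If you want to stay with plain spring-kafka instead of switching to Cloud Stream, the ack mode can also be set programmatically on the listener container factory. A minimal sketch, assuming spring-kafka 1.x APIs and an existing ConsumerFactory bean:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode;

@Configuration
public class KafkaListenerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // MANUAL ack mode: the listener must acknowledge each record explicitly.
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        return factory;
    }
}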
