It's not a huge problem but I'm curious where some extra stream consumers are coming from, and if that's a setting I can change.
I've got a very simple Spring Cloud Stream consumer set up against a local Kafka broker. Here's the Spring config:
spring:
  cloud:
    stream:
      bindings:
        consumer-in-0:
          destination: test-topic
          group: test-group
And the consumer class itself:
@Bean
Consumer<Message<String>> consumer() {
    return message -> System.out.println("Got it: " + message.getPayload());
}
When I run the app, though, I can see 3 consumers created in the output. But when I check the consumer-group members on my local broker, it's always just one consumer, and it's always the second consumer created (i.e. the one with client id test-group-2).
Just for clarity, I'm using Spring Boot version 2.3.4.RELEASE and cloud dependencies version Hoxton.SR10.
And here are the dependencies in the pom:
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
</dependencies>
Why am I getting 3 consumers? Why is the second one the only one that actually listens on the Kafka topic?
During startup, a temporary consumer is created to get information about the partitions provisioned for the topic.
The second consumer is the real consumer.
If you have the actuator (actually Micrometer) on the classpath, KafkaBinderMetrics creates another consumer so it can calculate the lag. It does not actually consume anything.
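For illustration, here is a rough sketch (not the actual KafkaBinderMetrics code) of how a metadata-only consumer can report lag without ever joining the group or polling any records; the topic and group names are the ones from the question:
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class LagSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test-group");
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> metadataConsumer = new KafkaConsumer<>(props)) {
            // Ask the broker for the topic's partitions -- no subscribe(), no poll(),
            // so this consumer never joins the group and never shows up as a member.
            List<TopicPartition> partitions = metadataConsumer.partitionsFor("test-topic").stream()
                    .map(p -> new TopicPartition(p.topic(), p.partition()))
                    .collect(Collectors.toList());

            Map<TopicPartition, Long> endOffsets = metadataConsumer.endOffsets(partitions);
            long totalLag = 0;
            for (TopicPartition tp : partitions) {
                OffsetAndMetadata committed = metadataConsumer.committed(tp);
                long committedOffset = committed != null ? committed.offset() : 0L;
                totalLag += endOffsets.get(tp) - committedOffset;
            }
            System.out.println("Approximate lag for test-group on test-topic: " + totalLag);
        }
    }
}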
Related
I have a Spring Boot application with Prometheus Pushgateway using Micrometer, mainly based on this tutorial: https://luramarchanjo.tech/2020/01/05/spring-boot-2.2-and-prometheus-pushgateway-with-micrometer.html
The pom.xml has the following related dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-core</artifactId>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
<dependency>
<groupId>io.prometheus</groupId>
<artifactId>simpleclient_pushgateway</artifactId>
<version>0.16.0</version>
</dependency>
And the application.properties file has:
management.metrics.export.prometheus.pushgateway.enabled=true
management.metrics.export.prometheus.pushgateway.shutdown-operation=PUSH
management.metrics.export.prometheus.pushgateway.baseUrl=localhost:9091
This works fine if I leave the application running; however, with my particular Spring Boot application, it sometimes loses the metrics sent just before shutdown.
I can see the following log entry, which indicates that the PrometheusPushGatewayManager is successfully calling its shutdown() method (configured with the PUSH shutdown operation in application.properties, as above) before the application shuts down:
{"level":"INFO","message":"Shutting down ExecutorService","file":"ExecutorConfigurationSupport.java","line_number":"208","thread_name":"Thread-1","@version":1,"logger_name":"org.springframework.boot.actuate.metrics.export.prometheus.PrometheusPushGatewayManager$PushGatewayTaskScheduler","class":"org.springframework.scheduling.concurrent.ExecutorConfigurationSupport"}
I have tried to invoke the shutdown() method on PrometheusPushGatewayManager from my application code, but I still have the same issue where the metrics do not appear consistently (seemingly at random) in the Pushgateway/Prometheus.
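One thing worth trying (a hedged sketch, not a confirmed fix) is to push the Micrometer Prometheus registry to the Pushgateway explicitly in a @PreDestroy hook, so the final metrics are sent even if the managed shutdown push races the JVM exit. The class name and the job name my_spring_app are placeholders; the PushGateway address matches the base URL above.
import javax.annotation.PreDestroy;

import io.micrometer.prometheus.PrometheusMeterRegistry;
import io.prometheus.client.exporter.PushGateway;
import org.springframework.stereotype.Component;

@Component
public class FinalPushOnShutdown {

    private final PrometheusMeterRegistry registry;

    public FinalPushOnShutdown(PrometheusMeterRegistry registry) {
        this.registry = registry;
    }

    @PreDestroy
    public void pushFinalMetrics() throws Exception {
        // Push whatever is currently in the Prometheus registry one last time before shutdown.
        PushGateway pushGateway = new PushGateway("localhost:9091");
        pushGateway.pushAdd(registry.getPrometheusRegistry(), "my_spring_app");
    }
}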
I am unable to set the group id in my Spring Cloud Stream Kafka consumer config using the properties below:
spring.cloud.stream.default-binder=kafka
spring.cloud.stream.kafka.binder.brokers=${kafka.bootstrap.servers}
spring.cloud.stream.bindings.INPUT.binder=kafka
spring.cloud.stream.bindings.INPUT.destination=datapipeline.ingestion.decision.topic
spring.cloud.stream.bindings.INPUT.content-type=application/json
spring.cloud.stream.bindings.INPUT.group=input-group-1
All of the above properties are applied except group, and I see the following in the console log while starting the service:
group.id = anonymous.cce5a71a-66fa-49c9-874b-09d5685713f7
Kindly help with this; I think this is why my consumer is unable to resume reading from where it left off.
I am using Spring Boot 2.1.3.RELEASE.
A few important dependencies related to this (I am also using the Spring Integration starter):
<spring-cloud.version>Greenwich.RELEASE</spring-cloud.version>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
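For context on how the binding name lines up with those property keys, here is a minimal hedged sketch using the stock Sink interface from the same Spring Cloud Stream generation; the name in spring.cloud.stream.bindings.<name>.* must exactly match the binding name declared in code (Sink's is the lowercase "input"), and a mismatch is one common cause of ending up with an anonymous group id like the one above.
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

// Sink declares a single input binding whose name is the constant Sink.INPUT, i.e. "input".
// The matching properties would therefore look like:
//   spring.cloud.stream.bindings.input.destination=datapipeline.ingestion.decision.topic
//   spring.cloud.stream.bindings.input.group=input-group-1
@EnableBinding(Sink.class)
public class DecisionListener {

    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        System.out.println("Received: " + payload);
    }
}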
I am trying a very simple route with Spring Boot 1.5.2.RELEASE + Camel (Spring Boot Starter) + ActiveMQ, which should read from a particular queue and then log the message. However, it looks like Camel is not picking up my spring.activemq configuration for the URL: the log shows it trying to connect to a different URL, it keeps retrying, and my Spring Boot app never starts. Based on the configuration I am providing below, how can I do the following:
Fix the configuration so that Spring's ActiveMQ configuration is actually used
Configure maxReconnectAttempts so that it does not try to connect forever if the URL is not reachable, which could be possible if the ActiveMQ instance goes down
Any assistance would be greatly appreciated. I did search relevant questions on Stack Overflow, but none gave me a solution to the issue I am facing.
This is the error I am seeing on the console; it continues for 60-70 attempts and counting. As you can see, the broker URL that Camel is picking up is some default URL that Spring has probably configured out of the box:
Failed to connect to [tcp://localhost:61616] after: 10 attempt(s) continuing to retry.
Here are my current configurations / code:
pom.xml - relevant portion
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.5.2.RELEASE</version>
</parent>
<dependencyManagement>
<dependencies>
<!-- Spring Cloud is part of the project where I am configuring camel routes -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>Camden.SR5</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-spring-boot-dependencies</artifactId>
<version>2.19.2</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<!-- I have this as the same project works as a web app as well
and therefore I do not need the
camel.springboot.main-run-controller=true configuration to be set
which is as per camel's spring boot documentation-->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- Camel - start -->
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-camel</artifactId>
</dependency>
<!-- Camel - end -->
</dependencies>
application.yml (Spring Boot ActiveMQProperties)
spring:
  activemq:
    brokerUrl: tcp://my.company.host:[port]  # This port is up and running
    user: user
    password: password
Camel route in Java
package com.mycamel.route;

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class SampleAmqCamelRouter extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("activemq:some.queue").to("log:com.mycamel.route?level=INFO&groupSize=10");
    }
}
First, add the spring-boot-starter-activemq dependency to your pom.xml. Then you can rely on its auto-configuration, which will create a ConnectionFactory based on the properties you have specified in your application.yml.
After that you have to configure Camel's ActiveMQComponent too. If you would like to reuse the ConnectionFactory created by the auto-configuration, you can do so with the following:
import javax.jms.ConnectionFactory;

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ActiveMQComponentConfig {

    @Bean(name = "activemq")
    public ActiveMQComponent createComponent(ConnectionFactory factory) {
        ActiveMQComponent activeMQComponent = new ActiveMQComponent();
        activeMQComponent.setConnectionFactory(factory);
        return activeMQComponent;
    }
}
You can find more information in Camel's ActiveMQ documentation.
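On the maxReconnectAttempts part of the question: assuming the standard ActiveMQ failover transport is an option, the reconnect behaviour can be bounded directly in the broker URL. A sketch, keeping the placeholder port from the question (the attempt counts are arbitrary values for illustration):
spring:
  activemq:
    brokerUrl: failover:(tcp://my.company.host:[port])?maxReconnectAttempts=5&startupMaxReconnectAttempts=3
With startupMaxReconnectAttempts the application can fail fast at boot if the broker is unreachable, instead of retrying forever.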
I'm trying to integrate hawtio in a spring-boot application using apache camel. I followed Spring-Boot Embedded Wars and added HawtioConfiguration from How to run hawt.io in spring boot application with embedded tomcat (except for the kubeservice and kubepod which are not in io.hawt.web package)
So, that works, up to the point where I try to manually send a message to a direct endpoint from the hawtio interface (http://localhost:8080/hawtio/index.html#/camel/sendMessage?tab=camel&nid=root-org.apache.camel-camel-1-endpoints-%22direct:%2F%2Fdummy%22). The following warning appears, and no message is sent:
Camel does not support sending to this endpoint.
So, did I forget anything? Here is my setup: Spring Boot 1.3.3.RELEASE with the following dependencies:
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-spring-boot-starter</artifactId>
<version>2.17.0</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jersey</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-actuator</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.hawt</groupId>
<artifactId>hawtio-springboot</artifactId>
<version>1.4.64</version>
</dependency>
<dependency>
<groupId>io.hawt</groupId>
<artifactId>hawtio-core</artifactId>
<version>1.4.64</version>
</dependency>
and the Application.java :
@SpringBootApplication
@EnableHawtio
@EnableAutoConfiguration(exclude = {DataSourceAutoConfiguration.class})
public class Application {

    @Autowired
    private ServletContext servletContext;

    public static void main(String[] args) {
        System.setProperty(AuthenticationFilter.HAWTIO_AUTHENTICATION_ENABLED, "false");
        SpringApplication.run(Application.class, args);
    }

    @PostConstruct
    public void init() {
        final ConfigManager configManager = new ConfigManager();
        configManager.init();
        servletContext.setAttribute("ConfigManager", configManager);
    }
}
Thanks!
Edit: using hawtio as a standalone app and connecting to the Spring Boot application works fine.
Edit 2: moving on, I used hawtio as a WAR in another project (same version), deployed on Tomcat 7. Same issue: cannot send to a direct endpoint.
Go figure.
The option of using hawtio as a WAR works very well with applications using Spring Boot and Camel; I am already using it successfully.
Check the hawtio GitHub code examples; the repository contains good samples to try:
https://github.com/hawtio/hawtio
Also, I will share my GitHub link with samples of using hawtio as a WAR or as a Maven plug-in.
I had the same problem: I could not use hawt.io to send to a direct endpoint (nor to any other endpoint). Maybe it is a general bug or unimplemented feature in hawt.io?
However, the following was possible:
Copy the Endpoint URL
Select the context node in hawt.io
Click on "Operations"
Use the method sendStringBody(java.lang.String,java.lang.String) to send a string to that endpoint
The first parameter is the endpoint URI, which you can paste in.
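For completeness, the same sendStringBody operation can be invoked programmatically through JMX. A hedged sketch, assuming it runs inside the same JVM as the Camel context; the ObjectName query pattern and the endpoint URI direct://dummy are assumptions based on the question:
import java.lang.management.ManagementFactory;
import java.util.Set;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class SendViaJmx {
    public static void main(String[] args) throws Exception {
        MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();

        // Find the managed CamelContext MBean(s) registered by Camel.
        Set<ObjectName> contexts = mbeanServer.queryNames(
                new ObjectName("org.apache.camel:type=context,*"), null);

        for (ObjectName context : contexts) {
            // Same operation hawt.io exposes under "Operations" on the context node.
            mbeanServer.invoke(context, "sendStringBody",
                    new Object[] { "direct://dummy", "hello from JMX" },
                    new String[] { "java.lang.String", "java.lang.String" });
        }
    }
}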
What's the difference between the two dependencies below? Do I really need the first one to make a consumer or producer app?
<dependencies>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.9.2</artifactId>
<version>0.8.2.1</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.8.2.1</version>
</dependency>
</dependencies>
My producer works fine with just the first one, but the consumer needs the second one.
I had thought the "kafka-clients" artifact would work for both producer and consumer, but it looks like "kafka.consumer.Consumer" comes from the other dependency. Why is there a difference?
Also, why is the first artifact named kafka_2.9.2, i.e. why is there a version identifier in the name?
The _2.9.2 suffix is the Scala version the kafka artifact was built against; that artifact contains the broker and the old Scala clients (including kafka.consumer.Consumer), while kafka-clients is the pure-Java client library. If you want to use the latest producer and consumer API, the correct Maven coordinates are:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.9.0.0</version>
</dependency>
See the API documentation for more.
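As a small hedged illustration of that API (the topic, group, and broker address are made-up placeholders), a consumer built purely on kafka-clients 0.9+ looks roughly like this:
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            while (true) {
                // Blocks up to 1 second waiting for new records.
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d, value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}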