spring cloud stream - consumer group bound - spring-boot

My consumer is bound to an anonymous consumer group instead of the consumer group I specified.
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost
          defaultBrokerPort: 9092
          zkNodes: localhost
          defaultZkPort: 2181
        bindings:
          inEvent:
            group: eventin
            destination: event
          outEvent:
            group: eventout
            destination: processevent
My Spring Boot application:
@SpringBootApplication
@EnableBinding(EventStream.class)
public class ConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConsumerApplication.class, args);
    }

    @StreamListener(value = "inEvent")
    public void getEvent(Event event) {
        System.out.println(event.name);
    }
}
My input/output channel interface:
public interface EventStream {

    @Input("inEvent")
    SubscribableChannel inEvent();

    @Output("outEvent")
    MessageChannel outEvent();
}
My console log:
: Started ConsumerApplication in 3.233 seconds (JVM running for 4.004)
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] Discovered group coordinator singh:9092 (id: 2147483647 rack: null)
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] Revoking previously assigned partitions []
: partitions revoked: []
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] (Re-)joining group
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] Successfully joined group with generation 1
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] Setting newly assigned partitions [inEvent-0]
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] Resetting offset for partition inEvent-0 to offset 2.
: partitions assigned: [inEvent-0]

The group property must not be in the kafka tree.
It has to be like this:
spring:
  cloud:
    stream:
      bindings:
        inEvent:
          group: eventin
          destination: event
See more info in the Docs: http://cloud.spring.io/spring-cloud-static/spring-cloud-stream/2.1.1.RELEASE/single/spring-cloud-stream.html#consumer-groups
The group is a common property, so it is the same regardless of the binder implementation. The kafka tree is for Apache Kafka-specific properties, exposed at the binder implementation level.
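For example, here is a sketch contrasting the two trees (the enableDlq property is used purely to illustrate a Kafka-specific setting):
spring:
  cloud:
    stream:
      bindings:
        inEvent:
          group: eventin          # common property, same for every binder
          destination: event
      kafka:
        bindings:
          inEvent:
            consumer:
              enableDlq: true     # Kafka-specific property, binder-level tree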

Related

Kafka consumer LeaveGroup request to coordinator

I have an interesting scenario. It seems like when there are no new messages to pick up (at least that's what I think is happening), my consumer suddenly shuts down.
I am using Kotlin + Spring Boot Kafka Producer and Consumer. My consumer is configured like this:
spring:
  profiles:
    active: ${SPRING_PROFILE:prod}
  cloud:
    stream:
      bindingRetryInterval: ${BINDING_RETRY_INTERVAL:0}
      default:
        contentType: application/*+avro
        group: my-app
        consumer:
          useNativeDecoding: false
          concurrency: ${CONSUMER_CONCURRENCY:3}
          maxAttempts: ${CONSUMER_MAX_ATTEMPTS:3}
      bindings:
        outbox:
          # Topic to consume from
          destination: my_outbox_topic
      kafka:
        bindings:
          outbox:
            consumer:
              enableDlq: ${ENABLE_DLQ:false}
        binder:
          brokers: ${BOOTSTRAP_SERVERS:localhost:9092}
          configuration.ssl.endpoint.identification.algorithm: ${SSL_ALGORITHM:}
          consumerProperties:
            schema.registry.url: ${SCHEMA_REGISTRY_URL:http://localhost:8081}
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            specific.avro.reader: false
So I start the app, it consumes for a while and then I get these types of logs:
2021-09-28 12:57:38.113 INFO 4254 --- [container-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=my-app] Member consumer-2-dc0e4115-6e9a-4a99-9d04-a3a2efb2c7b3 sending LeaveGroup request to coordinator __MASKED__:9094 (id: 2147483646 rack: null)
2021-09-28 12:57:38.113 INFO 4254 --- [container-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-2, groupId=my-app] Unsubscribed all topics or patterns and assigned partitions
2021-09-28 12:57:38.113 INFO 4254 --- [container-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2021-09-28 12:57:38.120 INFO 4254 --- [container-1-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-3, groupId=my-app] Member consumer-3-701431a3-7cb9-4962-8f89-f2df41a176fd sending LeaveGroup request to coordinator __MASKED__:9094 (id: 2147483646 rack: null)
2021-09-28 12:57:38.120 INFO 4254 --- [container-1-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-3, groupId=my-app] Unsubscribed all topics or patterns and assigned partitions
2021-09-28 12:57:38.120 INFO 4254 --- [container-1-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2021-09-28 12:57:38.189 INFO 4254 --- [container-1-C-1] essageListenerContainer$ListenerConsumer : my-app: Consumer stopped
2021-09-28 12:57:38.197 INFO 4254 --- [container-0-C-1] essageListenerContainer$ListenerConsumer : my-app: Consumer stopped

Spring Boot Kafka Startup error "Connection to node -1 could not be established. Broker may not be available."

I am trying to start Spring-Kafka with Spring Boot 2.1.7.RELEASE on localhost with Java 12.
Getting the Error:
"org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=inter] Connection to node -1 could not be established. Broker may not be available."
I tried switching the Java version to 11 and 8 and various properties:
spring:
  kafka:
    consumer:
      #bootstrap-servers: localhost:9092
      group-id: inter
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      bootstrap-servers: localhost:9092
@Service
public class KafkaHalloWorldMessagingService {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public KafkaHalloWorldMessagingService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendHalloToTheSystem(String messageToSend) {
        kafkaTemplate.send("interlinked.hallo.topic", messageToSend);
    }
}
@Component
public class KafkaHalloWorldListener {

    @KafkaListener(topics = "interlinked.hallo.topics", groupId = "inter")
    public void handle(String messageToListenTo) {
        System.out.println(messageToListenTo.toUpperCase());
    }
}
2019-08-22 16:25:20.580 WARN 5865 --- [ restartedMain] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=inter] Connection to node -1 could not be established. Broker may not be available.
Make sure the bootstrap server value in the yml file and the listener in the Kafka server.properties file are the same.
Update these two values in the server.properties file.
It can be found in the config folder of the Kafka download directory.
zookeeper.connect=<your IPv4 address>:2181
listeners=PLAINTEXT://<your IPv4 address>:9092
e.g. zookeeper.connect=10.147.2.161:2181
And why is the consumer's bootstrap server property commented out?
Please use the producer's bootstrap server value for the consumer too.
spring.kafka.bootstrap-servers=<your IPv4 address>:9092
Or split:
producer:
  bootstrap-servers: <your IPv4 address>:9092
consumer:
  bootstrap-servers: <your IPv4 address>:9092
Hope your ZooKeeper and Kafka are up.

spring config server not posting to rabbitmq

I've generated a Spring Boot config server from Spring's Initializr.
I've installed RabbitMQ with brew. Initializr generated the project with Boot version 2.1.1.RELEASE and cloud version Greenwich.M3.
The simple REST services are connecting to RabbitMQ queues. The config server is connected to a GitLab config repo.
But when I commit and push a change to it, the change is not reflected by the service application. The config server logs messages when the push is completed. Can anyone say what might be wrong? No messages ever seem to appear in the RabbitMQ console. I have been able to refresh the properties via actuator/bus-refresh through RabbitMQ, though.
config-server log messages on commit of change to config-repo's employee-service.yml file:
2018-12-07 11:53:12.185 INFO 84202 --- [nio-8888-exec-1] o.s.c.c.monitor.PropertyPathEndpoint : Refresh for: employee:service
2018-12-07 11:53:12.228 INFO 84202 --- [nio-8888-exec-1] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$b43cc593] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-12-07 11:53:12.253 INFO 84202 --- [nio-8888-exec-1] o.s.boot.SpringApplication : No active profile set, falling back to default profiles: default
2018-12-07 11:53:12.259 INFO 84202 --- [nio-8888-exec-1] o.s.boot.SpringApplication : Started application in 0.072 seconds (JVM running for 3075.606)
2018-12-07 11:53:12.345 INFO 84202 --- [nio-8888-exec-1] o.s.cloud.bus.event.RefreshListener : Received remote refresh request. Keys refreshed []
2018-12-07 11:53:12.345 INFO 84202 --- [nio-8888-exec-1] o.s.c.c.monitor.PropertyPathEndpoint : Refresh for: employee-service
2018-12-07 11:53:12.377 INFO 84202 --- [nio-8888-exec-1] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$b43cc593] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-12-07 11:53:12.398 INFO 84202 --- [nio-8888-exec-1] o.s.boot.SpringApplication : No active profile set, falling back to default profiles: default
2018-12-07 11:53:12.402 INFO 84202 --- [nio-8888-exec-1] o.s.boot.SpringApplication : Started application in 0.056 seconds (JVM running for 3075.749)
2018-12-07 11:53:12.489 INFO 84202 --- [nio-8888-exec-1] o.s.cloud.bus.event.RefreshListener : Received remote refresh request. Keys refreshed []
config-server has this application.yml:
---
server:
  port: ${PORT:8888}
spring:
  cloud:
    bus:
      enabled: true
    config:
      server:
        git:
          uri: ${CONFIG_REPO_URI:git@gitlab.<somedomain>:<somegroup>/config-repo.git}
          search-paths:
            - feature/initial-repo
  main:
    banner-mode: "off"
and ConfigServerApplication.java:
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}
and these gradle dependencies:
dependencies {
    implementation('org.springframework.cloud:spring-cloud-config-server')
    implementation('org.springframework.cloud:spring-cloud-starter-stream-rabbit')
    implementation('org.springframework.cloud:spring-cloud-config-monitor')
    testImplementation('org.springframework.boot:spring-boot-starter-test')
    implementation('org.springframework.cloud:spring-cloud-stream-test-support')
}
The service has this application.yml:
---
server:
  port: 8092
management:
  security:
    enabled: "false"
  endpoints:
    web:
      exposure:
        include:
          - '*'
spring:
  main:
    banner-mode: "off"
  rabbitmq:
    host: localhost
    port: 5672
    username: guest
    password: guest
this bootstrap.yml:
---
spring:
  application:
    name: employee-service
  cloud:
    config:
      uri:
        - http://localhost:8888
      label: feature(_)initial-repo
these gradle dependencies:
dependencies {
    implementation('org.springframework.boot:spring-boot-starter-web')
    implementation('org.springframework.cloud:spring-cloud-starter-config')
    implementation('org.springframework.cloud:spring-cloud-starter-bus-amqp')
    implementation('org.springframework.boot:spring-boot-starter-actuator')
    testImplementation('org.springframework.boot:spring-boot-starter-test')
}
this main class:
@SpringBootApplication
public class EmployeeServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(EmployeeServiceApplication.class, args);
    }
}
and this controller class:
@RefreshScope
@RestController
public class WelcomeController {

    @Value("${app.service-name}")
    private String serviceName;

    @Value("${app.shared.attribute}")
    private String sharedAttribute;

    @GetMapping("/service")
    public String getServiceName() {
        return "service name [" + this.serviceName + "]";
    }

    @GetMapping("/shared")
    public String getSharedAttribute() {
        return " application.yml [" + this.sharedAttribute + "]";
    }
}
Try to create and build your project with Maven instead of Gradle.
I experienced the same problem. I have two identical apps with the same RabbitMQ dependencies, one built with Maven and the second with Gradle. The Maven-based app publishes things to RabbitMQ as expected. The very same app, built with Gradle, doesn't establish a connection to RabbitMQ and doesn't publish events.
To further clarify things, I run both apps in Eclipse with Spring Tools.

How to stop micro service with Spring Kafka Listener, when connection to Apache Kafka Server is lost?

I am currently implementing a microservice which reads data from an Apache Kafka topic. I am using spring-boot, version 1.5.6.RELEASE, for the microservice and spring-kafka, version 1.2.2.RELEASE, for the listener in the same microservice. This is my Kafka configuration:
@Bean
public Map<String, Object> consumerConfigs() {
    return new HashMap<String, Object>() {{
        put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        put(ConsumerConfig.GROUP_ID_CONFIG, groupIdConfig);
        put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetResetConfig);
    }};
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
I have implemented the listener via the @KafkaListener annotation:
@KafkaListener(topics = "${kafka.dataSampleTopic}")
public void receive(ConsumerRecord<String, String> payload) {
    // business logic
    latch.countDown();
}
I need to be able to shut down the microservice when the listener loses connection to the Apache Kafka server.
When I kill the Kafka server, I get the following message in the Spring Boot log:
2017-11-01 19:58:15.721 INFO 16800 --- [ 0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator 192.168.0.4:9092 (id: 2145482646 rack: null) dead for group TestGroup
When I start the Kafka server, I get:
2017-11-01 20:01:37.748 INFO 16800 --- [ 0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator 192.168.0.4:9092 (id: 2145482646 rack: null) for group TestGroup.
So clearly the Spring Kafka listener in my microservice is able to detect when the Kafka server is up and running and when it's not. In the Confluent book Kafka: The Definitive Guide, the chapter "But How Do We Exit?" says that the wakeup() method needs to be called on the Consumer so that a WakeupException is thrown. So I tried to capture the two events (Kafka server down and Kafka server up) with the @EventListener annotation, as described in the Spring for Apache Kafka documentation, and then call wakeup(). But the example in the documentation is about detecting an idle consumer, which is not my case. Could someone please help me with this? Thanks in advance.
I don't know how to get a notification of the server down condition (in my experience, the consumer goes into a tight loop within the poll()).
However, if you figure that out, you can stop the listener container(s) which will wake up the consumer and exit the tight loop...
@Autowired
private KafkaListenerEndpointRegistry registry;

...

this.registry.stop();
2017-11-01 16:29:54.290 INFO 21217 --- [ad | so47062346] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator localhost:9092 (id: 2147483647 rack: null) dead for group so47062346
2017-11-01 16:29:54.346 WARN 21217 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : Connection to node 0 could not be established. Broker may not be available.
...
2017-11-01 16:30:00.643 WARN 21217 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : Connection to node 0 could not be established. Broker may not be available.
2017-11-01 16:30:00.680 INFO 21217 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
You can improve the tight loop by adding reconnect.backoff.ms, but the poll() never exits so we can't emit an idle event.
spring:
  kafka:
    consumer:
      enable-auto-commit: false
      group-id: so47062346
      properties:
        reconnect.backoff.ms: 1000
I suppose you could enable idle events and use a timer to detect if you've received no data (or idle events) for some period of time, and then stop the container(s).
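For example, a minimal sketch of that timer idea (the threshold names and values are illustrative; it assumes idle events are enabled via factory.getContainerProperties().setIdleEventInterval(...) and that @EnableScheduling is active):
@Autowired
private KafkaListenerEndpointRegistry registry;

// last time we saw data or an idle event (java.util.concurrent.atomic.AtomicLong)
private final AtomicLong lastSeen = new AtomicLong(System.currentTimeMillis());

private static final long MAX_SILENCE_MS = 300_000; // hypothetical threshold: 5 minutes

@KafkaListener(topics = "${kafka.dataSampleTopic}")
public void receive(ConsumerRecord<String, String> payload) {
    lastSeen.set(System.currentTimeMillis()); // data proves the broker is reachable
    // business logic
}

@EventListener
public void onIdle(ListenerContainerIdleEvent event) {
    lastSeen.set(System.currentTimeMillis()); // idle events also prove liveness
}

@Scheduled(fixedDelay = 60_000)
public void checkLiveness() {
    if (System.currentTimeMillis() - lastSeen.get() > MAX_SILENCE_MS) {
        registry.stop(); // wakes the consumer and exits the tight poll() loop
    }
}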

Hazelcast in Spring Boot ignoring my NetworkConfig

I'm using Hazelcast as my main data store backed by JPA to a database. I'm trying to get it to not use multicast to find other instances in our development environment since we're working on different classes, etc. and pointing at our own databases, but Hazelcast is still connecting up. I know it's calling my HazelcastConfiguration class, but it's also using the hazelcast-defaults.xml in the jar file and creating a cluster.
@Bean(name = "hazelcastInstance")
public HazelcastInstance getHazelcastInstance(Config config) {
    return new HazelcastInstanceFactory(config).getHazelcastInstance();
}

@Bean(name = "hazelCastConfig")
public Config config() {
    MapConfig userMapConfig = buildUserMapConfig();
    ...
    Config config = new Config();
    config.setNetworkConfig(buildNetworkConfig());
    return config;
}

private NetworkConfig buildNetworkConfig() {
    NetworkConfig networkConfig = new NetworkConfig();
    JoinConfig join = new JoinConfig();
    MulticastConfig multicastConfig = new MulticastConfig();
    multicastConfig.setEnabled(false);
    join.setMulticastConfig(multicastConfig);
    TcpIpConfig tcpIpConfig = new TcpIpConfig();
    tcpIpConfig.setEnabled(false);
    join.setTcpIpConfig(tcpIpConfig);
    networkConfig.setJoin(join);
    return networkConfig;
}
Now I can see that these are being called, and it has to be using my configuration because the entities get persisted to my database, but I also get this at startup:
2017-06-20 14:41:24.311 INFO 3741 --- [ main] c.h.i.cluster.impl.MulticastJoiner : [10.10.0.125]:5702 [dev] [3.8.1] Trying to join to discovered node: [10.10.0.127]:5702
2017-06-20 14:41:34.870 INFO 3741 --- [ached.thread-14] c.hazelcast.nio.tcp.InitConnectionTask : [10.10.0.125]:5702 [dev] [3.8.1] Connecting to /10.10.0.127:5702, timeout: 0, bind-any: true
2017-06-20 14:41:34.972 INFO 3741 --- [ached.thread-14] c.h.nio.tcp.TcpIpConnectionManager : [10.10.0.125]:5702 [dev] [3.8.1] Established socket connection between /10.10.0.125:54917 and /10.10.0.127:5702
2017-06-20 14:41:41.181 INFO 3741 --- [thread-Acceptor] c.h.nio.tcp.SocketAcceptorThread : [10.10.0.125]:5702 [dev] [3.8.1] Accepting socket connection from /10.10.0.146:60449
2017-06-20 14:41:41.183 INFO 3741 --- [ached.thread-21] c.h.nio.tcp.TcpIpConnectionManager : [10.10.0.125]:5702 [dev] [3.8.1] Established socket connection between /10.10.0.125:5702 and /10.10.0.146:60449
2017-06-20 14:41:41.185 INFO 3741 --- [ration.thread-0] com.hazelcast.system : [10.10.0.125]:5702 [dev] [3.8.1] Cluster version set to 3.8
2017-06-20 14:41:41.187 INFO 3741 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [10.10.0.125]:5702 [dev] [3.8.1]
Members [3] {
    Member [10.10.0.127]:5702 - e02dd47f-7bac-42d6-abf9-eeb62bdb1884
    Member [10.10.0.146]:5702 - 9239d33e-3b60-4bf5-ad81-da14524197ca
    Member [10.10.0.125]:5702 - 847d0008-6540-438d-bea6-7d8b19b8141a this
}
Anyone got ideas?
An easy way to configure a TCP/IP cluster is to use a hazelcast.xml config file:
<hazelcast>
    ...
    <network>
        <!-- check if this port is valid for the use case -->
        <port auto-increment="true">5701</port>
        <join>
            <multicast enabled="false">
            </multicast>
            <tcp-ip enabled="true">
                <hostname>machine1</hostname>
                <hostname>machine2</hostname>
                <hostname>machine3:5799</hostname>
                <!-- set interface values as per your environment -->
                <interface>192.168.1.0-7</interface>
                <interface>192.168.1.21</interface>
            </tcp-ip>
        </join>
        ...
    </network>
    ...
</hazelcast>
As the configuration above shows, while the enabled attribute of multicast is set to false, tcp-ip has to be set to true. For the non-multicast option, all or a subset of the cluster members' hostnames and/or IP addresses must be listed. Note that not all of the cluster members have to be listed there, but at least one of them has to be active in the cluster when a new member joins. The tcp-ip tag accepts an attribute called conn-timeout-seconds; the default value is 5. Increasing this value is recommended if you have many IPs listed and members cannot properly build up the cluster.
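If you prefer to stay with Java config, as in the question, a minimal sketch of the same join settings looks like this (the member hostnames are placeholders):
private NetworkConfig buildNetworkConfig() {
    NetworkConfig networkConfig = new NetworkConfig();
    JoinConfig join = networkConfig.getJoin();
    // turn multicast off and TCP/IP discovery on, mirroring the XML above
    join.getMulticastConfig().setEnabled(false);
    join.getTcpIpConfig()
        .setEnabled(true)
        .addMember("machine1")   // placeholder member addresses
        .addMember("machine2");
    return networkConfig;
}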
Loading the Configuration File
Add a hazelcast.xml file to the src/main/resources folder and Spring Boot will auto-configure Hazelcast for you. You can optionally configure the location of the hazelcast.xml file in your properties or YAML file using the spring.hazelcast.config configuration property.
# application.yml
spring:
  hazelcast:
    config: classpath:[path To]/hazelcast.xml

# application.properties
spring.hazelcast.config=classpath:[path To]/hazelcast.xml
The problem I was having was with Apache Camel and their HazelcastComponent. It doesn't automatically pick up your Hazelcast instance. When I configured the HazelcastComponent like this it started using the correct HazelcastInstance:
@Bean(name = "hazelcast")
HazelcastComponent hazelcastComponent() {
    HazelcastComponent hazelcastComponent = new HazelcastComponent();
    hazelcastComponent.setHazelcastInstance(hazelcastInstance);
    return hazelcastComponent;
}
