My Spring/Java consumer is not able to receive the messages produced by the producer. However, when I run a consumer from the console/terminal, it is able to receive the messages produced by the Spring/Java producer.
Consumer Configuration:
@Component
@ConfigurationProperties(prefix="kafka.consumer")
public class KafkaConsumerProperties {
private String bootstrap;
private String group;
private String topic;
public String getBootstrap() {
return bootstrap;
}
public void setBootstrap(String bootstrap) {
this.bootstrap = bootstrap;
}
public String getGroup() {
return group;
}
public void setGroup(String group) {
this.group = group;
}
public String getTopic() {
return topic;
}
public void setTopic(String topic) {
this.topic = topic;
}
}
Listener Configuration:
@Configuration
@EnableKafka
public class KafkaListenerConfig {
@Autowired
private KafkaConsumerProperties kafkaConsumerProperties;
@Bean
public Map<String, Object> getConsumerProperties() {
Map<String, Object> properties = new HashMap<>();
properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConsumerProperties.getBootstrap());
properties.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaConsumerProperties.getGroup());
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
return properties;
}
@Bean
public Deserializer stringKeyDeserializer() {
return new StringDeserializer();
}
@Bean
public Deserializer transactionJsonValueDeserializer() {
return new JsonDeserializer(Transaction.class);
}
@Bean
public ConsumerFactory<String, Transaction> consumerFactory() {
return new DefaultKafkaConsumerFactory<>(getConsumerProperties(), stringKeyDeserializer(), transactionJsonValueDeserializer());
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Transaction> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, Transaction> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConcurrency(1);
factory.setConsumerFactory(consumerFactory());
return factory;
}
}
Kafka Listener:
@Service
public class TransactionConsumer {
private static final Logger LOGGER = LoggerFactory.getLogger(Transaction.class);
@KafkaListener(topics={"transactions"}, containerFactory = "kafkaListenerContainerFactory")
public void onReceive(Transaction transaction) {
LOGGER.info("transaction = {}",transaction);
}
}
Consumer Application:
@SpringBootApplication
public class ConsumerApplication {
public static void main(String[] args) {
SpringApplication.run(ConsumerApplication.class, args);
}
}
TEST CASE 1: PASS
I started my Spring/Java producer and ran the consumer from the console. When I produce a message from the producer, the console consumer is able to receive it.
TEST CASE 2: FAILED
I started my Spring/Java consumer and ran the producer from the console. When I produce a message from the console producer, my Spring/Java consumer is not able to receive it.
TEST CASE 3: FAILED
I started my Spring/Java consumer and ran the Spring/Java producer. When I produce a message from the Spring/Java producer, my Spring/Java consumer is not able to receive it.
Question
Is there anything wrong in my consumer code?
Am I missing any configuration for my Kafka listener?
Do I need to explicitly start the listener? (I don't think so, since I can see the connection to the topic in the terminal log, but I'm not sure.)
You are missing AUTO_OFFSET_RESET_CONFIG in your consumer configs:
properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
auto.offset.reset
What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted):
earliest: automatically reset the offset to the earliest offset
latest: automatically reset the offset to the latest offset
none: throw exception to the consumer if no previous offset is found for the consumer's group
anything else: throw exception to the consumer
Note: auto.offset.reset=earliest only takes effect when Kafka has no committed offset for that consumer group (so in your case, add this property, switch to a new consumer group, and restart the application).
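For example, the question's getConsumerProperties() bean could be updated along these lines (the group id "transaction-consumer-v2" is just an illustrative new name; any group with no committed offsets works):
@Bean
public Map<String, Object> getConsumerProperties() {
    Map<String, Object> properties = new HashMap<>();
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConsumerProperties.getBootstrap());
    // a fresh group id, so no committed offsets exist yet and auto.offset.reset applies
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, "transaction-consumer-v2");
    properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
    properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
    properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
    // start from the earliest available record when the group has no committed offset
    properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return properties;
}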
Related
I have working code for a KafkaListener that reads messages from the beginning (offset = 0) of a topic (always running).
For my use case (messaging) I need two things:
Always catch new messages (this consumer is always running) on a specific topic/partition and send them to the frontend via WebSocket + STOMP. (I already have this part.)
Start a new consumer that reads from the beginning up to the current offset of a specific topic/partition, but only when the frontend signals, and stop it afterwards, so that this data (previous messages for a late user, or for later) can be fetched by the frontend via WebSocket + STOMP at the beginning of its session.
If I can dynamically (after getting a signal from the frontend) add/remove a KafkaListener with parameters (data from a POST request), it will serve both.
How can I actually implement this? Should I use a POST endpoint to notify the backend that it needs to load previous messages of this topic/partition right now and send them to a given URL? But then how can I dynamically start and stop that consumer (KafkaListener), rather than running it all the time, and pass parameters to it?
Here is a quick Spring Boot application showing how to dynamically create containers.
@SpringBootApplication
public class So61950229Application {
public static void main(String[] args) {
SpringApplication.run(So61950229Application.class, args);
}
@Bean
public ApplicationRunner runner(DynamicListener listener, KafkaTemplate<String, String> template) {
return args -> {
IntStream.range(0, 10).forEach(i -> template.send("so61950229", "foo" + i));
System.out.println("Hit enter to start a listener");
System.in.read();
listener.newContainer("so61950229", 0);
System.out.println("Hit enter to start another listener");
System.in.read();
listener.newContainer("so61950229", 0);
};
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("so61950229").partitions(1).replicas(1).build();
}
}
@Component
class DynamicListener {
private static final Logger LOG = LoggerFactory.getLogger(DynamicListener.class);
private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
private final ConcurrentMap<String, AbstractMessageListenerContainer<String, String>> containers
= new ConcurrentHashMap<>();
DynamicListener(ConcurrentKafkaListenerContainerFactory<String, String> factory) {
this.factory = factory;
}
void newContainer(String topic, int partition) {
ConcurrentMessageListenerContainer<String, String> container =
this.factory.createContainer(new TopicPartitionOffset(topic, partition));
String groupId = UUID.randomUUID().toString();
container.getContainerProperties().setGroupId(groupId);
container.setupMessageListener((MessageListener) record -> {
System.out.println(record);
});
this.containers.put(groupId, container);
container.start();
}
@EventListener
public void idle(ListenerContainerIdleEvent event) {
AbstractMessageListenerContainer<String, String> container = this.containers.remove(
event.getContainer(ConcurrentMessageListenerContainer.class).getContainerProperties().getGroupId());
if (container != null) {
LOG.info("Stopping idle container");
container.stop(() -> LOG.info("Stopped"));
}
}
}
These properties go in application.properties; the idle-event-interval makes the container publish the ListenerContainerIdleEvent that the idle() handler above uses to stop finished containers:
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.listener.idle-event-interval=5000
I have a Spring Boot service that consumes from a Kafka topic. When I consume a message, I perform certain tasks on it. Before I can perform these operations, I need to wait for the service to load some data into caches that I have set up. My issue is that if I let the Kafka consumer auto-start, it starts consuming before the cache loads and errors out.
I am trying to explicitly start the consumer after I load the cache; however, I get a NullPointerException.
@Configuration
public class KafkaConfig {
@Value("${kafka.server}")
String server;
@Value("${kafka.port}")
String port;
@Value("${kafka.group.id}")
String groupid;
@Bean
public ConsumerFactory<String, String> consumerFactory() {
Map<String, Object> config = new HashMap<>();
config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, server+":"+port);
config.put(ConsumerConfig.GROUP_ID_CONFIG, groupid);
config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
// config.put("security.protocol","SASL_PLAINTEXT");
// config.put("sasl.kerberos.service.name","kafka");
return new DefaultKafkaConsumerFactory<>(config);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory();
factory.setConsumerFactory(consumerFactory());
factory.setAutoStartup(false);
return factory;
}
}
KafkaListener
@Service
public class KafkaConsumer {
@Autowired
AggregationService aggregationService;
@Autowired
private KafkaListenerEndpointRegistry registry;
private final CounterService counterService;
public KafkaConsumer(CounterService counterService) {
this.counterService = counterService;
}
@KafkaListener(topics = "gliTransactionTopic", group = "gliDecoupling", id = "gliKafkaListener")
public boolean consume(String message,
@Header(KafkaHeaders.RECEIVED_PARTITION_ID) Integer partition,
@Header(KafkaHeaders.OFFSET) Long offset) throws ParseException {
System.out.println("Inside kafka listener :" + message+" partition :"+partition.toString()+" offset :"+offset.toString());
aggregationService.run();
return true;
}
}
Service to start/stop the listener
@Service
public class DecouplingController {
@Autowired
private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;
public void stop() {
MessageListenerContainer listenerContainer = kafkaListenerEndpointRegistry
.getListenerContainer("gliKafkaListener");
listenerContainer.stop();
}
public void start() {
MessageListenerContainer listenerContainer = kafkaListenerEndpointRegistry
.getListenerContainer("gliKafkaListener");
listenerContainer.start();
}
}
main method
@SpringBootApplication
public class DecouplingApplication {
Ignite ignite;
static IgniteCache<Long, MappingsEntity> mappingsCache;
public static void main(String[] args) {
SpringApplication.run(DecouplingApplication.class, args);
Ignition.setClientMode(true);
Ignite ignite = Ignition.ignite("ignite");
loadCaches(ignite);
}
public static boolean loadCaches(Ignite ignite) {
mappingsCache = ignite.getOrCreateCache("MappingsCache");
mappingsCache.loadCache(null);
System.out.println("Data Loaded");
DecouplingController dc=new DecouplingController();
dc.start();
return true;
}
}
Below is the exception
Data Loaded
Exception in thread "main" java.lang.NullPointerException
at com.ignite.spring.decoupling.controller.DecouplingController.start(DecouplingController.java:126)
at com.ignite.spring.decoupling.DecouplingApplication.loadCaches(DecouplingApplication.java:64)
at com.ignite.spring.decoupling.DecouplingApplication.main(DecouplingApplication.java:37)
Instead of manually creating an object of DecouplingController, autowire the dependency in DecouplingApplication:
@Autowired
DecouplingController deDecouplingController;
The ApplicationContext, which handles autowired dependencies, is not aware of the object you manually created with "new". The autowired kafkaListenerEndpointRegistry is therefore null in the DecouplingController object you created.
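For example (a sketch, not the original poster's code), the controller bean can be obtained from the ApplicationContext returned by SpringApplication.run, so its autowired KafkaListenerEndpointRegistry is populated before start() is called:
@SpringBootApplication
public class DecouplingApplication {
    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(DecouplingApplication.class, args);
        Ignition.setClientMode(true);
        Ignite ignite = Ignition.ignite("ignite");
        loadCaches(ignite, context.getBean(DecouplingController.class));
    }
    public static boolean loadCaches(Ignite ignite, DecouplingController controller) {
        IgniteCache<Long, MappingsEntity> mappingsCache = ignite.getOrCreateCache("MappingsCache");
        mappingsCache.loadCache(null);
        System.out.println("Data Loaded");
        // the Spring-managed controller starts the listener once the cache is ready
        controller.start();
        return true;
    }
}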
It also seems like gliKafkaListener1 was not registered in some part of the ConsumerConfig/ListenerConfig.
I am using Spring Cloud Stream version 1.1.2 to create/integrate a consumer with a microservice. I am setting the auto-commit-offset consumer property to false so that I can receive the acknowledgment header in the Message and manually acknowledge messages once they are consumed successfully.
My concern is: if something fails during message consumption, I will not send an acknowledgment back to the broker, but when can I expect the same message to be re-delivered to the consumer? Currently, I can verify re-delivery only if I restart the server; how does it work while the server is already up and running?
Consumer properties are set as:
kafka:
  bindings:
    input:
      consumer:
        auto-commit-offset: false
        reset-offsets: true
        start-offset: earliest
You need to seek back to the offset of the message in your client.
The offset is kept persistently on the Kafka broker for your group, and in memory on your client. The latter is lost when the service is restarted, which is why you then consume the message again.
This can be solved by:
public class KafkaConsumer implements ConsumerSeekAware {
and
this.seekCallBack.get().seek(consumerRecord.topic(), consumerRecord.partition(), consumerRecord.offset());
I hope this helps you!
Complete consumer code:
public class KafkaConsumer implements ConsumerSeekAware {
private static final String USER_TOPIC = "user-topic";
private static final String USER_LISTENER_ID = "userListener";
private static final String STRING_LISTENER = "string-listener";
private final ThreadLocal<ConsumerSeekCallback> seekCallBack = new ThreadLocal<>();
private final KafkaListenerEndpointRegistry registry;
private final TaskScheduler scheduler;
private final LocalValidatorFactoryBean validatorFactory;
public KafkaConsumer(final KafkaListenerEndpointRegistry registry, final TaskScheduler scheduler, final LocalValidatorFactoryBean validatorFactory) {
this.registry = registry;
this.scheduler = scheduler;
this.validatorFactory = validatorFactory;
}
public void registerSeekCallback(ConsumerSeekAware.ConsumerSeekCallback callback) {
this.seekCallBack.set(callback);
}
@Override
public void onPartitionsAssigned(final Map<TopicPartition, Long> assignments, final ConsumerSeekCallback callback) {
}
@Override
public void onIdleContainer(final Map<TopicPartition, Long> assignments, final ConsumerSeekCallback callback) {
}
@KafkaListener(id = USER_LISTENER_ID, topics = USER_TOPIC, containerFactory = "userContainerFactory")
public void consumeJson(ConsumerRecord<String, User> consumerRecord, User user, final Acknowledgment acknowledgment) {
if (user.getName().equals("reject")) {
throw new IllegalStateException("Illegal user:" + user.getName());
}
if (!user.getName().equals("retry")) {
acknowledgment.acknowledge();
log.info("Consumed JSON Message: " + user);
} else {
log.info("Rejected: " + user);
this.seekCallBack.get().seek(consumerRecord.topic(), consumerRecord.partition(), consumerRecord.offset());
}
}
}
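The userContainerFactory referenced by the listener is not shown in the post; for the Acknowledgment parameter to be injected, the container must use manual ack mode. A rough sketch, assuming a suitable ConsumerFactory<String, User> bean exists (in older spring-kafka versions the AckMode enum lives on AbstractMessageListenerContainer rather than ContainerProperties):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, User> userContainerFactory(ConsumerFactory<String, User> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, User> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // manual acks: the listener commits by calling acknowledgment.acknowledge()
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    return factory;
}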
I wrote a JUnit test case to test the code in the "With Java Configuration" lesson in the Spring Kafka docs (https://docs.spring.io/spring-kafka/reference/htmlsingle/#_with_java_configuration). The one difference is that I am using an embedded Kafka server in the class, instead of a localhost server. I am using Spring Boot 2.0.2 and its Spring Kafka dependency.
While running this test case, I see that the Consumer is not reading the message from the topic and the "assertTrue" check fails. There are no other errors.
@RunWith(SpringRunner.class)
public class SpringConfigSendReceiveMessage {
public static final String DEMO_TOPIC = "demo_topic";
@Autowired
private Listener listener;
@Test
public void testSimple() throws Exception {
template.send(DEMO_TOPIC, 0, "foo");
template.flush();
assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
}
@Autowired
private KafkaTemplate<Integer, String> template;
@Configuration
@EnableKafka
public static class Config {
@Bean
public KafkaEmbedded kafkaEmbedded() {
return new KafkaEmbedded(1, true, 1, DEMO_TOPIC);
}
@Bean
public ConsumerFactory<Integer, String> createConsumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(createConsumerFactory());
return factory;
}
@Bean
public Listener listener() {
return new Listener();
}
@Bean
public ProducerFactory<Integer, String> producerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
return new DefaultKafkaProducerFactory<>(props);
}
@Bean
public KafkaTemplate<Integer, String> kafkaTemplate() {
return new KafkaTemplate<Integer, String>(producerFactory());
}
}
}
class Listener {
public final CountDownLatch latch = new CountDownLatch(1);
@KafkaListener(id = "foo", topics = DEMO_TOPIC)
public void listen1(String foo) {
this.latch.countDown();
}
}
I think that this is because the @KafkaListener is using some wrong/default setting when reading from the topic. I don't see any errors in the logs.
Is this unit test case correct? How can I find the object that is created for the @KafkaListener annotation and see which Kafka broker it consumes from? Any input would be helpful. Thanks.
The message is sent before the consumer starts.
By default, new consumers start consuming at the end of the topic.
Add
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
The answer by @gary-russell is the best solution. Another way to resolve this issue is to delay the message-send step, which gives the consumer time to become ready. The following also works.
Lesson learned: for unit testing Kafka consumers, either consume all the messages in the test case, or ensure that the consumer is ready before the producer sends the message.
@Test
public void testSimple() throws Exception {
Thread.sleep(1000);
template.send(DEMO_TOPIC, 0, "foo");
template.flush();
assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
}
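A more deterministic alternative to the fixed sleep is to wait until the listener container has been assigned its partition before sending; a sketch assuming spring-kafka-test's ContainerTestUtils and an autowired KafkaListenerEndpointRegistry (neither appears in the original test):
@Autowired
private KafkaListenerEndpointRegistry registry;

@Test
public void testSimple() throws Exception {
    // the embedded broker was created with a single partition for DEMO_TOPIC
    for (MessageListenerContainer container : registry.getListenerContainers()) {
        ContainerTestUtils.waitForAssignment(container, 1);
    }
    template.send(DEMO_TOPIC, 0, "foo");
    template.flush();
    assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
}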
I configured a Spring Boot RabbitMQ dead letter queue, but the ErrorHandler never receives any message. I have searched all the questions about dead letter queues but could not figure it out. Can anyone help me?
RabbitConfig.java to configure the dead letter queue/exchange:
@Configuration
public class RabbitConfig {
public final static String MAIL_QUEUE = "mail_queue";
public final static String DEAD_LETTER_EXCHANGE = "dead_letter_exchange";
public final static String DEAD_LETTER_QUEUE = "dead_letter_queue";
public static Map<String, Object> args = new HashMap<String, Object>();
static {
args.put("x-dead-letter-exchange", DEAD_LETTER_EXCHANGE);
//args.put("x-dead-letter-routing-key", DEAD_LETTER_QUEUE);
args.put("x-message-ttl", 5000);
}
@Bean
public Queue mailQueue() {
return new Queue(MAIL_QUEUE, true, false, false, args);
}
@Bean
public Queue deadLetterQueue() {
return new Queue(DEAD_LETTER_QUEUE, true);
}
@Bean
public FanoutExchange deadLetterExchange() {
return new FanoutExchange(DEAD_LETTER_EXCHANGE);
}
@Bean
public Binding deadLetterBinding() {
return BindingBuilder.bind(deadLetterQueue()).to(deadLetterExchange());
}
}
ErrorHandler.java to process the dead letter queue:
@Component
@RabbitListener(queues = RabbitConfig.DEAD_LETTER_QUEUE)
public class ErrorHandler {
@RabbitHandler
public void handleError(Object message) {
System.out.println("xxxxxxxxxxxxxxxxxx"+message);
}
}
MailServiceImpl.java to process MAIL_QUEUE:
@Service
@RabbitListener(queues = RabbitConfig.MAIL_QUEUE)
@ConditionalOnProperty("spring.mail.host")
public class MailServiceImpl implements MailService {
@Autowired
private JavaMailSender mailSender;
@RabbitHandler
@Override
public void sendMail(TMessageMail form) {
//......
try {
mailSender.save(form);
}catch(Exception e) {
logger.error("error in sending mail: {}", e.getMessage());
throw new AmqpRejectAndDontRequeueException(e.getMessage());
}
}
}
Thank god, I finally found the answer!
All the configuration is correct; the problem is that queues like mail_queue were created before I configured the dead letter queue. When I set x-dead-letter-exchange on a queue after the queue has already been created, it does not take effect.
In other words: after changing a queue's arguments, you have to delete the queue and recreate it! Such a simple tip cost me several hours.
For how to delete the queue, I followed this answer:
Deleting queues in RabbitMQ
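For reference, a minimal sketch of deleting and re-declaring the queue programmatically with Spring AMQP's AmqpAdmin (this wiring is an assumption, not part of the original post; deleting the queue from the RabbitMQ management UI works just as well):
@Autowired
private AmqpAdmin amqpAdmin;

public void recreateMailQueue() {
    // delete the old queue so the new arguments (x-dead-letter-exchange, x-message-ttl) take effect
    amqpAdmin.deleteQueue(RabbitConfig.MAIL_QUEUE);
    // re-declare the queue with the dead-letter arguments from RabbitConfig
    amqpAdmin.declareQueue(new Queue(RabbitConfig.MAIL_QUEUE, true, false, false, RabbitConfig.args));
}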