Spring Kafka Consumer: seek to a particular offset to read a message - spring-boot

I created a Spring Boot application to consume a message from a particular offset, but the consumer's poll method returns zero records. If I run the application multiple times, it should return the same message each time, from offset 108134.
@Configuration
public class FlightEventListener {

    @Bean
    public void listenForMessage() throws Exception {
        TopicPartition tp = new TopicPartition("topic-name", 0);
        KafkaConsumer<String, Object> consumer = new KafkaConsumer<>(clusterOneProps);
        try {
            consumer.subscribe(Collections.singletonList("topic-name"), new ConsumerRebalanceListener() {

                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    consumer.seek(tp, 108134L);
                }
            });
            ConsumerRecords<String, Object> crs = consumer.poll(Duration.ofMillis(100L));
            System.out.println(crs.count());
            for (ConsumerRecord<String, Object> record : crs) {
                System.out.println("consumer Record is >>>>" + record.offset());
                System.out.println("consumer Record is >>>>" + record);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }
}
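A likely cause, and a possible fix sketched under the assumption that the goal is simply to re-read one known partition and offset: subscribe() joins a consumer group, and the assignment (and hence the seek) may not have happened by the time a 100 ms poll returns. Manually assigning the partition avoids the rebalance entirely. The polling loop and the 10-second deadline below are illustrative assumptions, not part of the original code.
TopicPartition tp = new TopicPartition("topic-name", 0);
KafkaConsumer<String, Object> consumer = new KafkaConsumer<>(clusterOneProps);
try {
    consumer.assign(Collections.singletonList(tp)); // no group rebalance to wait for
    consumer.seek(tp, 108134L);                     // takes effect before the first poll
    long deadline = System.currentTimeMillis() + 10_000;
    while (System.currentTimeMillis() < deadline) {
        ConsumerRecords<String, Object> crs = consumer.poll(Duration.ofMillis(500));
        if (!crs.isEmpty()) {
            crs.forEach(r -> System.out.println("offset=" + r.offset() + " >>>> " + r));
            break;
        }
    }
} finally {
    consumer.close();
}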
================================================
I also implemented ConsumerSeekAware, but the method is not being invoked. How do I get the method invoked? I am looking for it to be invoked during startup.
@Configuration
public class MessageSeeker extends AbstractConsumerSeekAware {

    @Autowired
    private FlightEventKafkaConfiguration clusterOneConfig;

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        // logic
    }
}
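One likely reason the callback never fires, offered as an assumption since the full configuration isn't shown: AbstractConsumerSeekAware hooks into Spring-managed listener containers, so onPartitionsAssigned is only invoked once a @KafkaListener container gets its partitions assigned; with no listener defined, nothing triggers it. A minimal sketch (the listener id is a placeholder; the topic and offset mirror the question):
@Component
public class MessageSeeker extends AbstractConsumerSeekAware {

    // Without a @KafkaListener there is no listener container, so the
    // seek callbacks are never registered or invoked.
    @KafkaListener(id = "flightEvents", topics = "topic-name")
    public void listen(ConsumerRecord<String, Object> record) {
        System.out.println(record);
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        // Seek each assigned partition to the desired offset at startup.
        assignments.keySet().forEach(tp -> callback.seek(tp.topic(), tp.partition(), 108134L));
    }
}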

Related

Kafka is not assigning a partition after Consumer.Poll(Duration.ZERO);

I started a project where I implement Apache Kafka.
I already have a working producer that writes data into the queue. So far so good. Now I wanted to write a consumer that reads all the data in the queue.
That is the corresponding code:
try {
    consumer.subscribe(Collections.singletonList("names"));
    if (startingPoint != null) {
        consumer.poll(Duration.ofMillis(0));
        consumer.seekToBeginning(consumer.assignment());
    }
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<String, String> record : records) {
        keyValuePairs.add(new String[]{record.key(), record.value()});
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    consumer.close();
}
That code doesn't currently work the way it is supposed to: only new records are consumed.
I was able to find out that seekToBeginning() isn't working because no partition is assigned to the consumer at that moment.
If I increase the duration of the poll it works. If I just pause the thread instead, it doesn't.
Could someone please explain why that is the case? I tried to find out by myself and already read something about the Kafka heartbeat, but I still haven't fully understood what exactly happens.
The assignment takes time; polling for 0 will generally mean the poll will exit before it occurs.
You should add a ConsumerRebalanceListener callback to the subscribe() method and perform the seek in onPartitionsAssigned().
EDIT
@SpringBootApplication
public class So69121558Application {

    public static void main(String[] args) {
        SpringApplication.run(So69121558Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(ConsumerFactory<String, String> cf, KafkaTemplate<String, String> template) {
        return args -> {
            template.send("so69121558", "test");
            Consumer<String, String> consumer = cf.createConsumer("group", "");
            consumer.subscribe(Collections.singletonList("so69121558"), new ConsumerRebalanceListener() {

                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    consumer.seekToBeginning(partitions);
                }
            });
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(System.out::println);
            Thread.sleep(5000);
            consumer.close();
        };
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so69121558").partitions(1).replicas(1).build();
    }
}
Here are a couple of examples of doing it the Spring way - just add one of these (or both) to the above class.
@KafkaListener(id = "so69121558", topics = "so69121558")
void listen(ConsumerRecord<?, ?> rec) {
    System.out.println(rec);
}

@KafkaListener(id = "so69121558-1", topics = "so69121558")
void pojoListen(String in) {
    System.out.println(in);
}
The seeks are done a bit differently too; here's the complete example:
@SpringBootApplication
public class So69121558Application extends AbstractConsumerSeekAware {

    public static void main(String[] args) {
        SpringApplication.run(So69121558Application.class, args);
    }

    @KafkaListener(id = "so69121558", topics = "so69121558")
    void listen(ConsumerRecord<?, ?> rec) {
        System.out.println(rec);
    }

    @KafkaListener(id = "so69121558-1", topics = "so69121558")
    void pojoListen(String in) {
        System.out.println(in);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so69121558").partitions(1).replicas(1).build();
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        callback.seekToBeginning(assignments.keySet());
    }
}

How to build a nonblocking Consumer when using AsyncRabbitTemplate with Request/Reply Pattern

I'm new to RabbitMQ and currently trying to implement a non-blocking producer with a non-blocking consumer. I've built a test producer where I played around with TypeReference:
@Service
public class Producer {

    @Autowired
    private AsyncRabbitTemplate asyncRabbitTemplate;

    public <T extends RequestEvent<S>, S> RabbitConverterFuture<S> asyncSendEventAndReceive(final T event) {
        return asyncRabbitTemplate.convertSendAndReceiveAsType(QueueConfig.EXCHANGE_NAME, event.getRoutingKey(), event, event.getResponseTypeReference());
    }
}
And somewhere else, the test function that gets called in a RestController:
@Autowired
Producer producer;

public void test() throws InterruptedException, ExecutionException {
    TestEvent requestEvent = new TestEvent("SOMEDATA");
    RabbitConverterFuture<TestResponse> reply = producer.asyncSendEventAndReceive(requestEvent);
    log.info("Hello! The Reply is: {}", reply.get());
}
This so far was pretty straightforward, where I'm stuck now is how to create a consumer which is non-blocking too. My current listener:
@RabbitListener(queues = QueueConfig.QUEUENAME)
public TestResponse onReceive(TestEvent event) {
    Future<TestResponse> replyLater = proccessDataLater(event.getSomeData());
    return replyLater.get();
}
As far as I'm aware, when using @RabbitListener the listener runs in its own thread, and I could configure the MessageListener to use more than one thread for the active listeners. Because of that, blocking the listener thread with future.get() does not block the application itself. Still, there might be cases where all threads are blocked and new events get stuck in the queue when they don't need to be. What I would like to do is receive the event without having to return the result immediately, which is probably not possible with @RabbitListener. Something like:
@RabbitListener(queues = QueueConfig.QUEUENAME)
public void onReceive(TestEvent event) {
    /*
     * Some fictional RabbitMQ API call where I get a ReplyContainer which contains
     * the correlation ID for the event. I can call replyContainer.reply(testResponse) later
     * in the code without blocking the listener thread.
     */
    ReplyContainer replyContainer = AsyncRabbitTemplate.getReplyContainer();
    // proccessDataLater calls reply on the container when done with its action
    proccessDataLater(event.getSomeData(), replyContainer);
}
What is the best way to implement such behaviour with rabbitmq in spring?
EDIT Config Class:
@Configuration
@EnableRabbit
public class RabbitMQConfig implements RabbitListenerConfigurer {

    public static final String topicExchangeName = "exchange";

    @Bean
    TopicExchange exchange() {
        return new TopicExchange(topicExchangeName);
    }

    @Bean
    public ConnectionFactory rabbitConnectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setHost("localhost");
        return connectionFactory;
    }

    @Bean
    public MappingJackson2MessageConverter consumerJackson2MessageConverter() {
        return new MappingJackson2MessageConverter();
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(rabbitConnectionFactory());
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
        return rabbitTemplate;
    }

    @Bean
    public AsyncRabbitTemplate asyncRabbitTemplate() {
        return new AsyncRabbitTemplate(rabbitTemplate());
    }

    @Bean
    public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    Queue queue() {
        return new Queue("test", false);
    }

    @Bean
    Binding binding() {
        return BindingBuilder.bind(queue()).to(exchange()).with("foo.#");
    }

    @Bean
    public SimpleRabbitListenerContainerFactory myRabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(rabbitConnectionFactory());
        factory.setMaxConcurrentConsumers(5);
        factory.setMessageConverter(producerJackson2MessageConverter());
        factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        return factory;
    }

    @Override
    public void configureRabbitListeners(final RabbitListenerEndpointRegistrar registrar) {
        registrar.setContainerFactory(myRabbitListenerContainerFactory());
    }
}
I don't have time to test it right now, but something like this should work; presumably you don't want to lose messages so you need to set the ackMode to MANUAL and do the acks yourself (as shown).
UPDATE
@SpringBootApplication
public class So52173111Application {

    private final ExecutorService exec = Executors.newCachedThreadPool();

    @Autowired
    private RabbitTemplate template;

    @Bean
    public ApplicationRunner runner(AsyncRabbitTemplate asyncTemplate) {
        return args -> {
            RabbitConverterFuture<Object> future = asyncTemplate.convertSendAndReceive("foo", "test");
            future.addCallback(r -> {
                System.out.println("Reply: " + r);
            }, t -> {
                t.printStackTrace();
            });
        };
    }

    @Bean
    public AsyncRabbitTemplate asyncTemplate(RabbitTemplate template) {
        return new AsyncRabbitTemplate(template);
    }

    @RabbitListener(queues = "foo")
    public void listen(String in, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag,
            @Header(AmqpHeaders.CORRELATION_ID) String correlationId,
            @Header(AmqpHeaders.REPLY_TO) String replyTo) {
        ListenableFuture<String> future = handleInput(in);
        future.addCallback(result -> {
            Address address = new Address(replyTo);
            this.template.convertAndSend(address.getExchangeName(), address.getRoutingKey(), result, m -> {
                m.getMessageProperties().setCorrelationId(correlationId);
                return m;
            });
            try {
                channel.basicAck(tag, false);
            }
            catch (IOException e) {
                e.printStackTrace();
            }
        }, t -> {
            t.printStackTrace();
        });
    }

    private ListenableFuture<String> handleInput(String in) {
        SettableListenableFuture<String> future = new SettableListenableFuture<String>();
        exec.execute(() -> {
            try {
                Thread.sleep(2000);
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            future.set(in.toUpperCase());
        });
        return future;
    }

    public static void main(String[] args) {
        SpringApplication.run(So52173111Application.class, args);
    }
}

Type mismatch: cannot convert from String to ListenableFuture<String>

I'm trying to implement a non-blocking call in Spring 4, but unfortunately it's throwing the error below:
Type mismatch: cannot convert from String to ListenableFuture<String>
and also the same error for the Map case: cannot convert from Map<String,String> to ListenableFuture<Map<String,String>>.
My method call is as below:
ListenableFuture<Map<String,String>> unusedQuota = doLogin(userIdentity, request);
doLogin simply returns a Map.
Is a converter required? What changes would be needed?
Thanks.
public class MyController {

    final DeferredResult<Map<String, String>> deferredResult = new DeferredResult<Map<String, String>>(5000L);

    private final Logger log = LoggerFactory.getLogger(MyController.class);

    @Inject
    RestTemplate restTemplate;

    @RequestMapping(value = "/loginservice", method = RequestMethod.GET)
    @Timed
    public DeferredResult<Map<String, String>> loginRequestService(@RequestParam String userIdentity, HttpServletRequest request) throws Exception {
        deferredResult.onTimeout(new Runnable() {
            @Override
            public void run() { // Retry on timeout
                deferredResult.setErrorResult(ResponseEntity.status(HttpStatus.REQUEST_TIMEOUT).body("Request timeout occurred."));
            }
        });
        @SuppressWarnings("unchecked")
        ListenableFuture<Map<String, String>> unusedQuota = doLogin(userIdentity, request); // compile error: doLogin returns a plain Map
        unusedQuota.addCallback(new ListenableFutureCallback<Map<String, String>>() {
            @Override
            public void onSuccess(Map<String, String> result) {
                deferredResult.setResult((Map<String, String>) ResponseEntity.ok(result));
            }

            @Override
            public void onFailure(Throwable t) {
                deferredResult.setErrorResult(ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(t));
            }
        });
        return deferredResult;
    }

    private Map<String, String> doLogin(String userIdentity, HttpServletRequest request) throws Exception {
        Map<String, String> unusedQuota = new HashMap<String, String>();
        unusedQuota.put("quota", "100");
        return unusedQuota;
    }
}
You are NOT passing the Map object when there is an exception, which is causing the issue, so your controller method needs to be changed as shown below. Also move the deferredResult object inside the controller method: you should not share the same instance of deferredResult across different user requests.
public class MyController {

    @Autowired
    private AsyncListenableTaskExecutor asyncTaskExecutor;

    @RequestMapping(value = "/loginservice", method = RequestMethod.GET)
    @Timed
    public DeferredResult<Map<String, String>> loginRequestService(@RequestParam String userIdentity, HttpServletRequest request) throws Exception {
        final DeferredResult<Map<String, String>> deferredResult = new DeferredResult<Map<String, String>>(5000L);
        deferredResult.onTimeout(new Runnable() {
            @Override
            public void run() { // Retry on timeout
                Map<String, String> map = new HashMap<>();
                // Populate map object with error details: request timeout occurred.
                deferredResult.setErrorResult(new ResponseEntity<Map<String, String>>(map, null, HttpStatus.REQUEST_TIMEOUT));
            }
        });
        ListenableFuture<Map<String, String>> task = asyncTaskExecutor.submitListenable(new Callable<Map<String, String>>() {
            @Override
            public Map<String, String> call() throws Exception {
                return doLogin(userIdentity, request);
            }
        });
        task.addCallback(new ListenableFutureCallback<Map<String, String>>() {
            @Override
            public void onSuccess(Map<String, String> result) {
                deferredResult.setResult(result);
            }

            @Override
            public void onFailure(Throwable t) {
                Map<String, String> map = new HashMap<>();
                // Populate map object with error details
                deferredResult.setErrorResult(new ResponseEntity<Map<String, String>>(map, null, HttpStatus.INTERNAL_SERVER_ERROR));
            }
        });
        return deferredResult;
    }
}
Also, you need to ensure that you configure a ThreadPoolTaskExecutor to back the asyncTaskExecutor bean.
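A minimal configuration sketch for that executor, assuming defaults that suit a small demo (the pool sizes and thread name prefix are illustrative, not prescribed by the answer):
@Configuration
@EnableAsync
public class AsyncConfig {

    // ThreadPoolTaskExecutor implements AsyncListenableTaskExecutor,
    // so it can be injected where submitListenable(...) is called.
    @Bean
    public AsyncListenableTaskExecutor asyncTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);      // illustrative
        executor.setMaxPoolSize(10);      // illustrative
        executor.setQueueCapacity(100);   // illustrative
        executor.setThreadNamePrefix("login-");
        executor.initialize();
        return executor;
    }
}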

Spring Batch: pass data between reader and writer

I would like to get data in the Writer that I've set in the Reader of my step. I know about ExecutionContexts (step and job) and about the ExecutionContextPromotionListener, via http://docs.spring.io/spring-batch/trunk/reference/html/patterns.html#passingDataToFutureSteps
The problem is that in the Writer I'm retrieving a null value for 'npag'.
Line in the ItemWriter:
LOG.info("INSIDE WRITE, NPAG: " + nPag);
I've been trying some workarounds without luck, and looking at answers to other similar questions... Any help? Thanks!
Here's my code:
READER
@Component
public class LCItemReader implements ItemReader<String> {

    private StepExecution stepExecution;

    private int nPag = 1;

    @Override
    public String read() throws CustomItemReaderException {
        ExecutionContext stepContext = this.stepExecution.getExecutionContext();
        stepContext.put("npag", nPag);
        nPag++;
        return "content";
    }

    @BeforeStep
    public void saveStepExecution(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }
}
WRITER
@Component
@StepScope
public class LCItemWriter implements ItemWriter<String> {

    private String nPag;

    @Override
    public void write(List<? extends String> continguts) throws Exception {
        try {
            LOG.info("INSIDE WRITE, NPAG: " + nPag);
        } catch (Throwable ex) {
            LOG.error("Error: " + ex.getMessage());
        }
    }

    @BeforeStep
    public void retrieveInterstepData(StepExecution stepExecution) {
        JobExecution jobExecution = stepExecution.getJobExecution();
        ExecutionContext jobContext = jobExecution.getExecutionContext();
        this.nPag = jobContext.get("npag").toString();
    }
}
JOB/STEP BATCH CONFIG
@Bean
public Job lCJob() {
    return jobs.get("lCJob")
            .listener(jobListener)
            .start(lCStep())
            .build();
}

@Bean
public Step lCStep() {
    return steps.get("lCStep")
            .<String, String>chunk(1)
            .reader(lCItemReader)
            .processor(lCProcessor)
            .writer(lCItemWriter)
            .listener(promotionListener())
            .build();
}
LISTENER
@Bean
public ExecutionContextPromotionListener promotionListener() {
    ExecutionContextPromotionListener executionContextPromotionListener = new ExecutionContextPromotionListener();
    executionContextPromotionListener.setKeys(new String[]{"npag"});
    return executionContextPromotionListener;
}
The ExecutionContextPromotionListener specifically states that it works at the end of a step, which is after the writer executes. So the promotion you are counting on does not occur when you think it does.
If I were you, I would set the value in the step context and get it from the step context if you need it within a single step. Otherwise I would set it in the job context.
The other aspect is the @BeforeStep annotation. That marks a method to execute before the step context exists, so the way you are setting the nPag value in the reader happens only after the step has started executing.
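A minimal sketch of the within-one-step approach suggested above, keeping a reference to the StepExecution and reading 'npag' from the step context at write time (class and key names follow the question's code):
@Component
@StepScope
public class LCItemWriter implements ItemWriter<String> {

    private static final Logger LOG = LoggerFactory.getLogger(LCItemWriter.class);

    private StepExecution stepExecution;

    @BeforeStep
    public void saveStepExecution(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }

    @Override
    public void write(List<? extends String> items) throws Exception {
        // Read at write time, after the reader has already put the value
        // into the step's execution context.
        Object nPag = stepExecution.getExecutionContext().get("npag");
        LOG.info("INSIDE WRITE, NPAG: " + nPag);
    }
}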
You are trying to read the value of nPag before it is set in the reader, ending up with the default value, which is null. You need to read the value of nPag at logging time, directly from the execution context. You can keep a reference to the jobContext. Try this:
@Component
@StepScope
public class LCItemWriter implements ItemWriter<String> {

    private String nPag;

    private ExecutionContext jobContext;

    @Override
    public void write(List<? extends String> continguts) throws Exception {
        try {
            this.nPag = jobContext.get("npag").toString();
            LOG.info("INSIDE WRITE, NPAG: " + nPag);
        } catch (Throwable ex) {
            LOG.error("Error: " + ex.getMessage());
        }
    }

    @BeforeStep
    public void retrieveInterstepData(StepExecution stepExecution) {
        JobExecution jobExecution = stepExecution.getJobExecution();
        jobContext = jobExecution.getExecutionContext();
    }
}
In your Reader and Writer you need to implement the ItemStream interface and keep the ExecutionContext as a member variable. Here I give the example with a Processor instead of a Writer, but the same applies to a Writer as well. It's working fine for me and I am able to pass values from the reader to the processor.
I have set the value into the context in the reader and read the value in the processor.
public class EmployeeItemReader implements ItemReader<Employee>, ItemStream {

    ExecutionContext context;

    @Override
    public Employee read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
        context.put("ajay", "i am going well");
        Employee emp = new Employee();
        emp.setEmpId(1);
        emp.setFirstName("ajay");
        emp.setLastName("goswami");
        return emp;
    }

    @Override
    public void close() throws ItemStreamException {
    }

    @Override
    public void open(ExecutionContext arg0) throws ItemStreamException {
        context = arg0;
    }

    @Override
    public void update(ExecutionContext arg0) throws ItemStreamException {
        context = arg0;
    }
}
My processor
public class CustomItemProcessor implements ItemProcessor<Employee, ActiveEmployee>, ItemStream {

    ExecutionContext context;

    @Override
    public ActiveEmployee process(Employee emp) throws Exception {
        // See this line
        System.out.println(context.get("ajay"));
        ActiveEmployee actEmp = new ActiveEmployee();
        actEmp.setEmpId(emp.getEmpId());
        actEmp.setFirstName(emp.getFirstName());
        actEmp.setLastName(emp.getLastName());
        actEmp.setAdditionalInfo("Employee is processed");
        return actEmp;
    }

    @Override
    public void close() throws ItemStreamException {
    }

    @Override
    public void open(ExecutionContext arg0) throws ItemStreamException {
    }

    @Override
    public void update(ExecutionContext arg0) throws ItemStreamException {
        context = arg0;
    }
}
Hope this helps.

Spring Batch: how to regroup/aggregate user data into a single object

I am trying to transform user operations (like purchases) into a user summary class (expenses by user). A user can have multiple operations but only one summary. I cannot sum purchases in the reader because I need a processor to reject some operations depending on another service.
So, some code:
class UserOperation {
    String userId;
    Integer price;
}

class UserSummary {
    String userId;
    Long sum;
}
@Bean
public Step retrieveOobClientStep1(StepBuilderFactory stepBuilderFactory, ItemReader<UserOperation> userInformationJdbcCursorItemReader, ItemProcessor<UserOperation, UserSummary> userInformationsProcessor, ItemWriter<UserSummary> flatFileWriter) {
    return stepBuilderFactory.get("Step1").<UserOperation, UserSummary>chunk(100) // chunked result that needs to be aggregated... not good
            .reader(userInformationJdbcCursorItemReader) // read all user operations from DB
            .processor(userInformationsProcessor) // I need to reject some operations - but here 1 operation = 1 summary, which is not good
            .writer(flatFileWriter) // write result into flat file
            .build();
}
I think that ItemReader/ItemProcessor/ItemWriter is meant for single-item processing. But how do you regroup multiple records into a single object using Spring Batch? Only with a Tasklet?
Here is one possibility, but it causes problems with a small commit interval (a grouping-reader alternative is sketched after this code):
public class UserSummaryAggregatorItemStreamWriter implements ItemStreamWriter<UserSummary>, InitializingBean {

    private ItemStreamWriter<UserSummary> delegate;

    @Override
    public void afterPropertiesSet() throws Exception {
        Assert.notNull(delegate, "'delegate' may not be null.");
    }

    public void setDelegate(ItemStreamWriter<UserSummary> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void write(List<? extends UserSummary> items) throws Exception {
        Map<String, UserSummary> userSummaryMap = new HashMap<String, UserSummary>();
        // Aggregate
        for (UserSummary item : items) {
            UserSummary savedUserSummary = userSummaryMap.get(item.getUserId());
            if (savedUserSummary != null) {
                savedUserSummary.incrementSum(item.getSum()); // sum
            } else {
                savedUserSummary = item;
            }
            userSummaryMap.put(item.getUserId(), savedUserSummary); // key must match the lookup above
        }
        Collection<UserSummary> values = userSummaryMap.values();
        if (values != null) {
            delegate.write(new ArrayList<UserSummary>(values));
        }
    }

    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        delegate.open(executionContext);
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        delegate.update(executionContext);
    }

    @Override
    public void close() throws ItemStreamException {
        delegate.close();
    }
}
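One way around the commit-interval problem, sketched as a suggestion rather than a confirmed answer: do the grouping in the reader by wrapping the operations reader in Spring Batch's SingleItemPeekableItemReader, so each read() already returns a complete UserSummary and the chunk size no longer splits one user's operations. This assumes the operations arrive sorted by userId (e.g. via ORDER BY in the JDBC reader); field names follow the question's classes.
// Sketch: a grouping reader that peeks ahead and merges consecutive
// operations of the same user into one UserSummary per read() call.
public class UserSummaryReader implements ItemReader<UserSummary> {

    private final SingleItemPeekableItemReader<UserOperation> delegate;

    public UserSummaryReader(ItemReader<UserOperation> operationsReader) {
        this.delegate = new SingleItemPeekableItemReader<>();
        this.delegate.setDelegate(operationsReader); // e.g. the JDBC cursor reader, sorted by userId
    }

    @Override
    public UserSummary read() throws Exception {
        UserOperation op = delegate.read();
        if (op == null) {
            return null; // end of input
        }
        UserSummary summary = new UserSummary();
        summary.userId = op.userId;
        summary.sum = op.price.longValue();
        // Keep consuming while the next operation belongs to the same user.
        for (UserOperation next = delegate.peek();
                next != null && next.userId.equals(summary.userId);
                next = delegate.peek()) {
            delegate.read(); // actually consume the peeked item
            summary.sum += next.price;
        }
        return summary;
    }
}
The per-operation rejection the question mentions could then be applied inside this reader (skipping an operation before adding its price), while the processor keeps working on whole summaries.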
