Use Micrometer timer with manual counter increment - Spring Boot

I have a KafkaListener which receives messages containing a list of objects.
@KafkaListener(
        id = "dataConsumer",
        topics = "data.topic",
        groupId = "${spring.kafka.consumer.group-id}",
        containerFactory = "dataKafkaListenerContainerFactory")
public void consumeData(DataContainer message) {
    List<Data> data = message.getList();
    ...
}
The list of objects can vary in size, so per-message metrics may not be useful.
I can get the timer metrics for this method by going to /actuator/metrics/spring.kafka.listener?tag=name:dataConsumer-0, but the count is per message, not per element in the list. How can I change this metric, or create a similar one, so it reflects the time and count of the data elements in the message?

You can register your own Meter with the MeterRegistry - refer to the Micrometer Documentation.
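For example, a minimal sketch that times each batch and counts the individual elements. The meter names app.data.batch and app.data.elements and the process() helper are made up for illustration:

import java.util.List;

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class DataConsumer {

    private final Timer batchTimer;
    private final Counter elementCounter;

    public DataConsumer(MeterRegistry registry) {
        // meter names are made up - choose whatever fits your conventions
        this.batchTimer = Timer.builder("app.data.batch")
                .description("time spent processing one DataContainer")
                .register(registry);
        this.elementCounter = Counter.builder("app.data.elements")
                .description("number of Data elements processed")
                .register(registry);
    }

    @KafkaListener(id = "dataConsumer", topics = "data.topic",
            groupId = "${spring.kafka.consumer.group-id}",
            containerFactory = "dataKafkaListenerContainerFactory")
    public void consumeData(DataContainer message) {
        List<Data> data = message.getList();
        batchTimer.record(() -> process(data)); // time the whole batch
        elementCounter.increment(data.size());  // count the individual elements
    }
}

Both meters then show up under /actuator/metrics/app.data.batch and /actuator/metrics/app.data.elements.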

Related

Spring RabbitMQ Retry policy only on a specific listener

I would like to have a retry policy only on a specific listener that listens to a specific queue (a DLQ in this specific case).
@RabbitListener(queues = "my_queue_dlq", concurrency = "5")
public void listenDLQ(Message dlqMessage) {
    // here implement logic to resend the message to the original queue (my_queue) n times
    // with a certain interval, and after n times push it to the parking lot queue
}
But if I am not mistaken, when I specify a retry policy (e.g. in application.yaml), ALL @RabbitListeners use it.
I suppose the only way would be to create a dedicated container factory, but it would be identical to the default one with ONLY an extra retry policy... and that doesn't seem like the best way to do it.
Something like that :
@RabbitListener(containerFactory = "container-factory-with-retrypolicy", queues = "myDLQ", concurrency = "5")
public void listenDLQ(Message dlqMessage) {
    // here implement logic to resend the message to the original queue n times with a certain interval
}
Do you see alternatives ?
Thank you in advance.
The ListenerContainer instances are registered with the RabbitListenerEndpointRegistry. You can obtain the desired one by its @RabbitListener(id) value. From there you can get access to the MessageListener (casting it to AbstractAdaptableMessageListener) and set its retryTemplate property.
Or you can implement a ContainerCustomizer<AbstractMessageListenerContainer>, check its getListenerId() and do the same manipulation against its getMessageListener().
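A minimal sketch of the second approach, following the description above. The listener id dlqListener and the injected RetryTemplate bean are assumptions for illustration, and the @Bean method belongs in a @Configuration class:

import org.springframework.amqp.rabbit.config.ContainerCustomizer;
import org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.adapter.AbstractAdaptableMessageListener;
import org.springframework.context.annotation.Bean;
import org.springframework.retry.support.RetryTemplate;

@Bean
ContainerCustomizer<AbstractMessageListenerContainer> dlqRetryCustomizer(RetryTemplate retryTemplate) {
    return container -> {
        // only touch the container backing @RabbitListener(id = "dlqListener")
        if ("dlqListener".equals(container.getListenerId())) {
            ((AbstractAdaptableMessageListener) container.getMessageListener())
                    .setRetryTemplate(retryTemplate);
        }
    };
}

This keeps the default container factory untouched; the customizer runs for every container but only modifies the one whose id matches.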

@KafkaListener per specific header value

I have a @KafkaListener:
@KafkaListener(topicPattern = "SameTopic")
public void onMessage(Message<String> message, Acknowledgment acknowledgment) {
    String eventType = new String((byte[]) message.getHeaders().get("Event-Type"), StandardCharsets.UTF_8);
    switch (eventType) {
        case "create" -> doCreate(message);
        case "update" -> doUpdate(message);
        case "delete" -> doDelete(message);
    }
}
The producer sets a custom header Event-Type with three possible values: create, update, delete. Currently I'm reading this header value from the Message and then invoking the rest of the logic according to it.
Is there any way to create three @KafkaListeners where each of them consumes only messages matching some criteria - in my case, filtered by the Event-Type header value?
@KafkaListener(topicPattern = "SameTopic", ...)
public void onCreate(Message<String> message, Acknowledgment acknowledgment) {
    doCreate(message);
}

@KafkaListener(topicPattern = "SameTopic", ...)
public void onUpdate(Message<String> message, Acknowledgment acknowledgment) {
    doUpdate(message);
}

@KafkaListener(topicPattern = "SameTopic", ...)
public void onDelete(Message<String> message, Acknowledgment acknowledgment) {
    doDelete(message);
}
I'm aware of RecordFilterStrategy, but couldn't get it to work for this.
Consider having those types mapped to partitions of the topic.
That way you definitely can have a different @KafkaListener for each type, with a specific partition assigned:
/**
 * The topicPartitions for this listener when using manual topic/partition
 * assignment.
 * <p>
 * Mutually exclusive with {@link #topicPattern()} and {@link #topics()}.
 * @return the topic names or expressions (SpEL) to listen to.
 */
TopicPartition[] topicPartitions() default {};
The doc is here: https://docs.spring.io/spring-kafka/docs/current/reference/html/#manual-assignment
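For example, a minimal sketch, assuming the producer routes create and update records to partitions 0 and 1 of SameTopic (and likewise delete to partition 2):

import org.springframework.kafka.annotation.TopicPartition;

@KafkaListener(topicPartitions = @TopicPartition(topic = "SameTopic", partitions = "0"))
public void onCreate(Message<String> message, Acknowledgment acknowledgment) {
    doCreate(message);
}

@KafkaListener(topicPartitions = @TopicPartition(topic = "SameTopic", partitions = "1"))
public void onUpdate(Message<String> message, Acknowledgment acknowledgment) {
    doUpdate(message);
}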
It's probably not going to work well with several instances of your app, since with manual assignment there is no consumer group involved. You may consider refining the logic to use 3 different topics. Or, if that is not possible from the producer side, use Kafka Streams to split() the original topic into other topics according to the record key.
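A minimal sketch of that split, assuming Kafka Streams 2.8+ (for the split() API) and that the producer uses the event type as the record key; the target topic names are made up:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Branched;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> events = builder.stream("SameTopic");

events.split()
      .branch((key, value) -> "create".equals(key),
              Branched.withConsumer(s -> s.to("create.topic")))
      .branch((key, value) -> "update".equals(key),
              Branched.withConsumer(s -> s.to("update.topic")))
      .defaultBranch(Branched.withConsumer(s -> s.to("delete.topic")));

Each @KafkaListener can then subscribe to its own topic with a regular consumer group.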

Pick next message after previous fully processed

I'm stuck with this kind of problem. I use Kafka as a transport between services. I tried to draw a sequence diagram.
First of all, the planning service gets the main task, handles it, and then passes it on to a few services. My main problem is: I must not pick another main task until, e.g., the second service sends its result to Kafka and the planning service has processed that result.
My main listener has this structure:
@KafkaListener(
        containerFactory = "genFactory",
        topics = "${main}")
public void listenStartGeneratorTopic(GeneratorMessage message, Acknowledgment acknowledgment) {
    // do some logic
    // THEN send a message to the first service; in that listener a new task is sent to the second
    sendTaskToQueue(task);
    acknowledgment.acknowledge();
    log.info("All done in method");
}
As I understand it, I need to call acknowledge() only after all the logic with the result from the second service is done. So I tried to add a boolean flag in a CompletableFuture, setting it to true when my planning service gets the response from the second service, and doing a blocking get() in the main listener to continue after.
private CompletableFuture<Boolean> isMessageProcessed = new CompletableFuture<>();

@KafkaListener(topics = "${report}")
public void listenReport(ReportMessage reportMessage) {
    isMessageProcessed = CompletableFuture.completedFuture(true);
}

@KafkaListener(
        containerFactory = "genFactory",
        topics = "${main}")
public void listenStartGeneratorTopic(GeneratorMessage message, Acknowledgment acknowledgment) {
    // do some logic
    // THEN send a message to the first service; in that listener a new task is sent to the second
    sendTaskToQueue(task);
    isMessageProcessed.join();
    log.info("message is ready for commit");
    acknowledgment.acknowledge();
}
That looks strange enough, and the idea doesn't bring me a result.
So, can you give me advice on what to do in this situation?
Why not use 6 topics? I believe this is a better separation of duties and might allow you to scale better.
I guess I would check Kafka Streams (KStream) as well in your case...
My idea goes like this:
PLANNING SERVICE reads from topic1.start, does its work, sends to topic2;
FIRST SERVICE reads from topic2, does its work, sends to topic3;
PLANNING SERVICE (another instance) reads from topic3, does its work, writes to topic4;
SECOND SERVICE reads from topic4, does its work, sends to topic5;
PLANNING SERVICE (another instance) reads from topic5 and writes to topic6.done.
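A minimal sketch of the planning service side of that pipeline; the helper methods and the KafkaTemplate wiring are made up for illustration:

@KafkaListener(topics = "topic1.start")
public void onMainTask(GeneratorMessage message) {
    Task task = doPlanningWork(message);  // hypothetical helper
    kafkaTemplate.send("topic2", task);   // hand off to FIRST SERVICE
}

@KafkaListener(topics = "topic3")
public void onFirstServiceResult(ReportMessage report) {
    Task next = continuePlanning(report); // hypothetical helper
    kafkaTemplate.send("topic4", next);   // hand off to SECOND SERVICE
}

Each listener commits its own offsets independently, so no listener ever blocks waiting for a result on another topic.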

Consuming from Camel queue every x minutes

I'm attempting to implement a way to time my consumer so that it receives messages from a queue every 30 minutes or so.
For context, I have 20 messages in my error queue until x minutes have passed, then my route consumes all messages on the queue and proceeds to 'sleep' until another 30 minutes have passed.
I'm not sure of the best way to implement this. I've tried Spring @Scheduled, the Camel timer, etc., and none of them do what I'm hoping for. I've been trying to get this to work with a route policy, but no dice on the correct functionality - it just seems to consume from the queue immediately.
Is route policy the correct path or is there something else to use?
The route that reads from the queue will always read any message as quickly as it can.
One thing you could do is start/stop or suspend the route that consumes the messages, so you would have this sort of setup:
route 1: error_q_reader, which goes from('jms')
route 2: a timed route that fires every 20 minutes
Route 2 can use the control bus component to start route 1:
from("timer:errorQueueTrigger?period=1200000") // fires every 20 minutes (period in ms)
    .to("controlbus:route?routeId=route1&action=start")
The tricky part here is knowing when to stop the route. Do you let it run for 5 minutes? Do you want to stop it once the messages are all consumed? There's probably a way to run another route that checks the queue depth (say every minute or so) and shuts down route 1 when it reaches 0. You might get it to work, but I can assure you it will get messy as you try to deal with a number of async operations.
You could also try something more exotic, like a custom QueueBrowseStrategy which can fire an event to shutdown route 1 when there are no messages on the queue.
I built a custom bean to drain a queue and close, but it's not a very elegant solution, and I'd love to find a better one.
public class TriggeredPollingConsumer {

    private ConsumerTemplate consumer;
    private Endpoint consumerEndpoint;
    private String endpointUri;
    private ProducerTemplate producer;
    private static final Logger logger = Logger.getLogger(TriggeredPollingConsumer.class);

    public TriggeredPollingConsumer() {}

    public TriggeredPollingConsumer(ConsumerTemplate consumer, String endpoint, ProducerTemplate producer) {
        this.consumer = consumer;
        this.endpointUri = endpoint;
        this.producer = producer;
    }

    public void setConsumer(ConsumerTemplate consumer) {
        this.consumer = consumer;
    }

    public void setProducer(ProducerTemplate producer) {
        this.producer = producer;
    }

    public void setConsumerEndpoint(Endpoint endpoint) {
        consumerEndpoint = endpoint;
    }

    public void pollConsumer() throws Exception {
        long count = 0;
        try {
            if (consumerEndpoint == null) consumerEndpoint = consumer.getCamelContext().getEndpoint(endpointUri);
            logger.debug("Consuming: " + consumerEndpoint.getEndpointUri());
            consumer.start();
            producer.start();
            while (true) {
                logger.trace("Awaiting message: " + ++count);
                Exchange exchange = consumer.receive(consumerEndpoint, 60000);
                if (exchange == null) break;
                logger.trace("Processing message: " + count);
                producer.send(exchange);
                consumer.doneUoW(exchange);
                logger.trace("Processed message: " + count);
            }
            producer.stop();
            consumer.stop();
            logger.debug("Consumed " + (count - 1) + " message" + (count == 2 ? "." : "s."));
        } catch (Throwable t) {
            logger.error("Something went wrong!", t);
            throw t;
        }
    }
}
You configure the bean, and then call the bean method from your timer, and configure a direct route to process the entries from the queue.
from("timer:...")
.beanRef("consumerBean", "pollConsumer");
from("direct:myRoute")
.to(...);
It will then read everything in the queue and stop as soon as no entries arrive within a minute. You might want to reduce the minute, but I found that with one second, if JMS was a bit slow, it would time out halfway through draining the queue.
I've also been looking at the sjms-batch component and how it might be used with a pollEnrich pattern, but so far I haven't been able to get that to work.
I have solved this by running my application as a CronJob in a microservices approach and, to give it the power to gracefully shut itself down, setting the property camel.springboot.duration-max-idle-seconds. Thus, your JMS consumer route stays simple.
Another approach would be to declare a route to control the "lifecycle" (start, sleep and resume) of your JMS consumer route.
I would strongly suggest that you use the first approach.
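For example, in application.properties (the 60-second value is arbitrary):

# shut the application down gracefully once it has been idle for 60 seconds
camel.springboot.duration-max-idle-seconds = 60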
If you use ActiveMQ, you can leverage its Scheduler feature.
You can delay the delivery of a message on the broker by simply setting the JMS property AMQ_SCHEDULED_DELAY to the delay in milliseconds. It is very easy in a Camel route:
.setHeader("AMQ_SCHEDULED_DELAY", constant(60000))
It is not exactly what you are looking for, because it does not drain a queue every 30 minutes, but instead delays each individual message by 30 minutes.
Notice that you have to enable schedulerSupport in your broker configuration; otherwise the delay properties are ignored.
<broker brokerName="localhost" dataDirectory="${activemq.data}" schedulerSupport="true">
...
</broker>
You should consider the Aggregator EIP:
from(URI_WAITING_QUEUE)
    .aggregate(new GroupedExchangeAggregationStrategy())
        .constant(true)
    .completionInterval(TIMEOUT)
    .to(URI_PROCESSING_BATCH_OF_EXCEPTIONS);
This example describes the following rules: all objects arriving on URI_WAITING_QUEUE are grouped into a List. constant(true) is the grouping condition (i.e., no condition - everything is grouped together). And every TIMEOUT period (in millis) all grouped objects are passed to the URI_PROCESSING_BATCH_OF_EXCEPTIONS queue.
So the URI_PROCESSING_BATCH_OF_EXCEPTIONS queue will deal with a List of objects to process. You can apply the Split EIP to split them and process them one by one.
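A minimal sketch of that split, assuming a recent Camel version where GroupedExchangeAggregationStrategy puts the grouped exchanges into the message body as a List; the direct:processSingleException endpoint is made up:

from(URI_PROCESSING_BATCH_OF_EXCEPTIONS)
    .split(body())                 // one exchange per grouped element
    .to("direct:processSingleException");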

(Twitter) Storm's Window On Aggregation

I'm playing around with Storm, and I'm wondering where Storm specifies (if possible) the (tumbling/sliding) window size for an aggregation. E.g., if we want to find the trending topics for the previous hour on Twitter, how do we specify that a bolt should return results every hour? Is this done programmatically inside each bolt, or is there some way to specify a "window"?
Disclaimer: I wrote the Trending Topics with Storm article referenced by gakhov in his answer above.
I'd say the best practice is to use the so-called tick tuples in Storm 0.8+. With these you can configure your own spouts/bolts to be notified at certain time intervals (say, every ten seconds or every minute).
Here's a simple example that configures the component in question to receive tick tuples every ten seconds:
// in your spout/bolt
@Override
public Map<String, Object> getComponentConfiguration() {
    Config conf = new Config();
    int tickFrequencyInSeconds = 10;
    conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, tickFrequencyInSeconds);
    return conf;
}
You can then use a conditional switch in your spout/bolt's execute() method to distinguish "normal" incoming tuples from the special tick tuples. For instance:
// in your spout/bolt
@Override
public void execute(Tuple tuple) {
    if (isTickTuple(tuple)) {
        // now you can trigger e.g. a periodic activity
    } else {
        // do something with the normal tuple
    }
}

private static boolean isTickTuple(Tuple tuple) {
    return tuple.getSourceComponent().equals(Constants.SYSTEM_COMPONENT_ID)
            && tuple.getSourceStreamId().equals(Constants.SYSTEM_TICK_STREAM_ID);
}
Again, I wrote a pretty detailed blog post about doing this in Storm a few days ago as gakhov pointed out (shameless plug!).
Add a new spout with a parallelism degree of 1, and have it emit an empty signal and then Utils.sleep until the next time (all done in nextTuple). Then link all relevant bolts to that spout using all-grouping, so all of their instances will receive that same signal.
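A minimal sketch of such a signal spout; the class name, stream field, and component ids are made up, and on pre-1.0 Storm the imports would be backtype.storm.* instead:

import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class HourlySignalSpout extends BaseRichSpout {

    private SpoutOutputCollector collector;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        collector.emit(new Values(System.currentTimeMillis())); // the "empty" signal
        Utils.sleep(60 * 60 * 1000L);                           // sleep until the next hour
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("signal"));
    }
}

In the topology, wire each relevant bolt with allGrouping("signal-spout") so every instance receives the signal.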
