How to access JMS statistics in Spring Boot?

I understand that JMS has no statistics spec, so there is no standard way of reading things like "count of messages processed", "average time in queue", etc.
I'm looking at two approaches:
1. Access the ActiveMQ statistics directly.
2. Maintain statistics in the JMS message consumer.
With (1), I'm not finding examples of how to get those stats using Spring Boot. With (2), I'm wondering whether the consumer itself needs to maintain the statistics, or if there's a better way.
Does anyone have any working examples?

For the record, I ended up implementing a broker-specific solution (here: ActiveMQ)
import org.springframework.stereotype.Component
import javax.jms.*

typealias QueueName = String

@Component
class BrokerFacade(private val connectionFactory: ConnectionFactory) {

    private val statisticsBrokers = mutableMapOf<QueueName, StatisticsBrokerAccess>()

    @Throws(JMSException::class)
    fun getStatistics(queueName: QueueName): QueueStatistics? {
        val brokerAccess = statisticsBrokers.getOrPut(queueName) { StatisticsBrokerAccess(queueName) }
        return brokerAccess.getCurrentStatistics()?.let {
            QueueStatistics(
                queueName,
                it.getLong("size"),
                it.getLong("dequeueCount"),
                it.getDouble("minEnqueueTime"),
                it.getDouble("maxEnqueueTime"),
                it.getDouble("averageEnqueueTime"),
                it.getLong("memoryUsage"),
                it.getLong("memoryPercentUsage")
            )
        }
    }

    inner class StatisticsBrokerAccess(queueName: QueueName) {

        private val statisticsMessageConsumer: MessageConsumer
        private val statisticsMessageProducer: MessageProducer
        private val statisticsMessage: Message

        init {
            val connection = connectionFactory.createConnection()
            connection.start()
            val session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
            // Replies from the statistics plugin arrive on a temporary reply queue.
            val statisticsReplyQueue = session.createTemporaryQueue()
            statisticsMessageConsumer = session.createConsumer(statisticsReplyQueue)
            // ActiveMQ's statistics plugin answers requests sent to this destination.
            val statisticsQueue = session.createQueue("ActiveMQ.Statistics.Destination.$queueName")
            statisticsMessageProducer = session.createProducer(statisticsQueue)
            statisticsMessage = session.createMessage()
            statisticsMessage.setJMSReplyTo(statisticsReplyQueue)
        }

        fun getCurrentStatistics(): MapMessage? {
            statisticsMessageProducer.send(statisticsMessage)
            return statisticsMessageConsumer.receive(2000) as MapMessage?
        }
    }
}
QueueStatistics is a data class holding the statistic values.
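For reference, a minimal sketch of what that data class could look like, with one property per map key read in getStatistics() above (the names and types mirror that code; the unit comments are assumptions):

data class QueueStatistics(
    val queueName: QueueName,
    val size: Long,                  // "size": messages currently on the queue
    val dequeueCount: Long,          // "dequeueCount": messages consumed so far
    val minEnqueueTime: Double,      // enqueue times, assumed to be milliseconds
    val maxEnqueueTime: Double,
    val averageEnqueueTime: Double,
    val memoryUsage: Long,
    val memoryPercentUsage: Long
)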

If you are using JMS via Spring Integration, its system management features provide statistics that are registered as Micrometer metrics in the Spring Boot app:
https://docs.spring.io/spring-integration/docs/5.1.7.RELEASE/reference/html/#system-management-chapter
https://docs.spring.io/spring-boot/docs/2.1.7.RELEASE/reference/html/boot-features-integration.html
https://docs.spring.io/spring-boot/docs/2.1.7.RELEASE/reference/html/production-ready-metrics.html
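Once those meters are registered, they can also be read programmatically from the MeterRegistry (or browsed via the /actuator/metrics endpoint). A minimal sketch, assuming the spring.integration.* meter names described in the linked documentation:

import io.micrometer.core.instrument.MeterRegistry
import org.springframework.stereotype.Component

@Component
class IntegrationMetricsReader(private val meterRegistry: MeterRegistry) {

    // Lists the names of all registered Spring Integration meters.
    // The "spring.integration" prefix is an assumption based on the docs above.
    fun integrationMeterNames(): List<String> =
        meterRegistry.meters
            .map { it.id.name }
            .filter { it.startsWith("spring.integration") }
            .distinct()
}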

Related

aggregator spring cloud stream with timeout

I want to make an application that receives messages, stores those messages in a list, and later, on a schedule, releases those messages every x amount of time.
I know Spring Cloud Stream has an aggregator that already does this, but I think I need to do it manually, because I need to keep a unique message based on a key and only replace the old message if it matches a specific condition (I think of it as a Set aggregator with conditions).
What I have tried so far:
Also available in this repository: https://github.com/chalimbu/AggregatorQuestionStack
Processor.
import org.springframework.cloud.stream.annotation.EnableBinding
import org.springframework.cloud.stream.annotation.Input
import org.springframework.cloud.stream.annotation.Output
import org.springframework.cloud.stream.messaging.Processor
import org.springframework.scheduling.annotation.Scheduled

@EnableBinding(Processor::class)
class SetAggregatorProcessor(val storageService: StorageService) {

    @Input
    fun inputMessage(input: Map<String, Any>) {
        storageService.messages.add(input)
    }

    @Output
    @Scheduled(fixedDelay = 20000)
    fun produceOutput(): List<Map<String, Any>> {
        // Copy before clearing, otherwise the returned list is emptied as well.
        val messages = storageService.messages.toList()
        storageService.messages.clear()
        return messages
    }
}
Memory storage.
import org.springframework.stereotype.Service

@Service
class StorageService {
    var messages: MutableList<Map<String, Any>> = mutableListOf()
}
This code generates the following error when I start pushing messages.
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:139) ~[spring-integration-core-5.5.8.jar:5.5.8]
The idea is to deploy this app as part of the spring cloud stream (dataflow) platform.
I prefer the declarative approach (over the functional approach), but if somebody knows how to do it the Reactor way, I could settle for that.
Thanks for any help or advice.
Thanks to this example (https://github.com/spring-cloud/spring-cloud-stream-samples/blob/main/processor-samples/sensor-average-reactive-kafka/src/main/java/sample/sensor/average/SensorAverageProcessorApplication.java) I was able to figure something out using Flux, in case someone else needs it:
import java.time.Duration
import java.util.function.BiFunction
import java.util.function.Function
import org.springframework.context.annotation.Configuration
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

@Configuration
class SetAggregatorProcessor : Function<Flux<Map<String, Any>>, Flux<MutableList<Map<String, Any>>>> {

    override fun apply(data: Flux<Map<String, Any>>): Flux<MutableList<Map<String, Any>>> {
        // Collect everything that arrives within a 20-second window into one list.
        return data.window(Duration.ofSeconds(20)).flatMap { window: Flux<Map<String, Any>> ->
            aggregateList(window)
        }
    }

    private fun aggregateList(group: Flux<Map<String, Any>>): Mono<MutableList<Map<String, Any>>> {
        return group.reduce(
            mutableListOf(),
            BiFunction<MutableList<Map<String, Any>>, Map<String, Any>, MutableList<Map<String, Any>>> {
                accumulator: MutableList<Map<String, Any>>, element: Map<String, Any> ->
                accumulator.add(element)
                accumulator
            }
        )
    }
}
Update: https://github.com/chalimbu/AggregatorQuestionStack/tree/main/src/main/kotlin/com/project/co/SetAggregator
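Since the original requirement was to keep only one message per key (and replace the older one when a condition matches), the same windowing approach can reduce into a map instead of a list. A hedged sketch of that variant, assuming each message carries an "id" field used as the key and using "newest wins" as a stand-in for the real replacement condition:

import java.time.Duration
import java.util.function.Function
import org.springframework.context.annotation.Configuration
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

@Configuration
class UniqueKeyAggregatorProcessor : Function<Flux<Map<String, Any>>, Flux<Collection<Map<String, Any>>>> {

    override fun apply(data: Flux<Map<String, Any>>): Flux<Collection<Map<String, Any>>> =
        data.window(Duration.ofSeconds(20)).flatMap { window -> aggregateByKey(window) }

    private fun aggregateByKey(group: Flux<Map<String, Any>>): Mono<Collection<Map<String, Any>>> =
        group.reduce(linkedMapOf<Any, Map<String, Any>>()) { accumulator, element ->
            val key = element["id"] ?: element.hashCode()  // "id" is an assumed key field
            // Replace whatever was stored for this key; a real condition check would go here.
            accumulator[key] = element
            accumulator
        }.map { it.values }
}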

Spring RabbitMq Listener Configuration

We are using RabbitMQ with the default Spring Boot configuration. We have a use case in which we want no parallelism for one of the listeners. That is, we want only one thread of the consumer to be running at any given point in time. We want this because the nature of the use case is such that the messages must be consumed in order; if there are multiple threads per consumer, the messages may be processed out of order.
Since we are using the defaults and have not explicitly tweaked the container, we are using the SimpleMessageListenerContainer. Looking at the documentation, I tried fixing the number of consumers using concurrency = "1". The annotation on the target method looks like this: @RabbitListener(queues = ["queue-name"], concurrency = "1").
As per the documentation, this should have ensured that there is only one consumer thread:
{@link org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer
* SimpleMessageListenerContainer} if this value is a simple integer, it sets a fixed
* number of consumers in the {@code concurrentConsumers} property
2021-10-29 06:11:26.361 INFO 29752 --- [ntContainer#4-1] c.t.t.i.p.s.xxx : Created xxx
2021-10-29 06:11:26.383 INFO 29752 --- [ntContainer#0-1] c.t.t.i.p.s.xxx : Created xxx
The thread IDs to note here are [ntContainer#4-1] and [ntContainer#0-1].
So the question is: how can we ensure that there is only one thread per consumer at any given point in time?
Edit: Adding the code of the consumer class for more context
@ConditionalOnProperty(value = ["rabbitmq.sharebooking.enabled"], havingValue = "true", matchIfMissing = false)
class ShareBookingConsumer @Autowired constructor(
    private val shareBookingRepository: ShareBookingRepository,
    private val objectMapper: ObjectMapper,
    private val shareDtoToShareBookingConverter: ShareBookingDtoToShareBookingConverter
) {
    private val logger = LoggerFactory.getLogger(javaClass)

    init {
        logger.info("start sharebooking created consumer")
    }

    @RabbitListener(queues = ["tax_engine.share_booking"], concurrency = "1-1", exclusive = true)
    @Timed
    @Transactional
    fun consumeShareBookingCreatedEvent(message: Message) {
        try {
            consumeShareBookingCreatedEvent(message.body)
        } catch (e: Exception) {
            throw AmqpRejectAndDontRequeueException(e)
        }
    }

    private fun consumeShareBookingCreatedEvent(event: ByteArray) {
        toShareBookingCreationMessageEvent(event).let { creationEvent ->
            RmqMetrics.measureEventMetrics(creationEvent)
            val shareBooking = shareDtoToShareBookingConverter.convert(creationEvent.data)
            val persisted = shareBookingRepository.save(shareBooking)
            logger.info("Created shareBooking ${creationEvent.data.id}")
        }
    }

    private fun toShareBookingCreationMessageEvent(event: ByteArray) =
        objectMapper.readValue(event, shareBookingCreateEventType)

    companion object {
        private val shareBookingCreateEventType =
            object : TypeReference<RMQMessageEnvelope<ShareBookingCreationDto>>() {}
    }
}
Edit: Adding application thread analysis using VisualVM.
5 threads get created for 5 listeners.
Screenshot: https://i.stack.imgur.com/gQINE.png
Set concurrency = "1-1". Note that the concurrency of the Listener depends not only on concurrentConsumers, but also on maxConcurrentConsumers:
If you are using a custom factory:
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(CachingConnectionFactory cachingConnectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(cachingConnectionFactory);
    factory.setConcurrentConsumers(1);
    factory.setMaxConcurrentConsumers(1);
    return factory;
}
See: https://docs.spring.io/spring-amqp/docs/current/reference/html/#simplemessagelistenercontainer for detail.
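If you stay with Spring Boot's auto-configured container factory instead of declaring your own bean, the same limits can be expressed as configuration properties. A minimal sketch using the standard spring.rabbitmq.listener.simple.* properties:
spring.rabbitmq.listener.simple.concurrency=1
spring.rabbitmq.listener.simple.max-concurrency=1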
EDIT:
I did a simple test with 2 consumers & 2 threads:
@RabbitListener(queues = "myQueue111", concurrency = "1-1")
public void handleMessage(Object message) throws InterruptedException {
    LOGGER.info("Received message : {} in {}", message, Thread.currentThread().getName());
}
@RabbitListener(queues = "myQueue222", concurrency = "1-1")
public void handleMessage2(Object message) throws InterruptedException {
    LOGGER.info("Received message222 : {} in {}", message, Thread.currentThread().getName());
}
Try this:
@RabbitListener(queues = ["queue-name"], exclusive = true)
See https://docs.spring.io/spring-amqp/docs/current/reference/html/#exclusive-consumer

how to consume events from kafka by a Spring Rest endpoint

I'm new to Kafka. I've seen that the consumer is "always running" and retrieves messages from a topic as soon as they are published.
In a typical database web application you have a REST API that connects to the DB and returns some response.
From what I see, the consumer stays active and never closes.
So I can't figure out how to return a subset of messages from a topic based on a client request.
I thought the service would create a consumer to get what I need, but since the consumer never closes, I guess that idea is not correct.
What should I do?
It's then a simple matter of persisting the messages received through the KafkaListener, say by adding each of them to a simple collection (along with its timestamp), and implementing an endpoint to filter the messages accordingly and return some of them.
@Controller
public class KafkaController {

    @Autowired
    private KafkaProducerConfig kafkaProducerConfig;

    // ConcurrentHashMap because the listener and the REST endpoint run on different threads.
    private Map<Date, String> msgMap = new ConcurrentHashMap<>();

    @KafkaListener(topics = "myTopic", groupId = "myGroup")
    public void listenAndAddMsg(String message) {
        msgMap.put(new Date(), message);
    }

    @PostMapping("messages")
    @ResponseBody
    public Map<Date, String> filterMessages(@RequestBody Interval interval) {
        return msgMap.entrySet()
            .stream()
            .filter(entry -> entry.getKey().after(interval.getStartDate()) && entry.getKey().before(interval.getEndDate()))
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
public class Interval {
    private Date startDate;
    private Date endDate;
    // setters and getters
}

Performing Aggregation of records and launch spring cloud task in single Processor in Spring cloud stream

I am trying to perform the following actions:
1. Aggregate the messages
2. Launch a Spring Cloud Task
But I am not able to pass the aggregated message to the method launching the task. Below is the piece of code:
@Autowired
private TaskProcessorProperties processorProperties;

@Autowired
Processor processor;

@Autowired
private AppConfiguration appConfiguration;

@Transformer(inputChannel = MyProcessor.intermidiate, outputChannel = Processor.OUTPUT)
public Object setupRequest(String message) {
    Map<String, String> properties = new HashMap<>();
    if (StringUtils.hasText(this.processorProperties.getDataSourceUrl())) {
        properties.put("spring_datasource_url", this.processorProperties.getDataSourceUrl());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourceDriverClassName())) {
        properties.put("spring_datasource_driverClassName", this.processorProperties.getDataSourceDriverClassName());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourceUserName())) {
        properties.put("spring_datasource_username", this.processorProperties.getDataSourceUserName());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourcePassword())) {
        properties.put("spring_datasource_password", this.processorProperties.getDataSourcePassword());
    }
    properties.put("payload", message);
    TaskLaunchRequest request = new TaskLaunchRequest(
        this.processorProperties.getUri(), null, properties, null,
        this.processorProperties.getApplicationName());
    System.out.println("inside task launcher **************************");
    System.out.println(request.toString() + "**************************");
    return new GenericMessage<>(request);
}

@ServiceActivator(inputChannel = Processor.INPUT, outputChannel = MyProcessor.intermidiate)
@Bean
public MessageHandler aggregator() {
    AggregatingMessageHandler aggregatingMessageHandler =
        new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(),
            new SimpleMessageStore(10));
    AggregatorFactoryBean aggregatorFactoryBean = new AggregatorFactoryBean();
    //aggregatorFactoryBean.setMessageStore();
    //aggregatingMessageHandler.setOutputChannel(processor.output());
    //aggregatorFactoryBean.setDiscardChannel(processor.output());
    aggregatingMessageHandler.setSendPartialResultOnExpiry(true);
    aggregatingMessageHandler.setSendTimeout(1000L);
    aggregatingMessageHandler.setCorrelationStrategy(new ExpressionEvaluatingCorrelationStrategy("'FOO'"));
    aggregatingMessageHandler.setReleaseStrategy(new MessageCountReleaseStrategy(3)); //ExpressionEvaluatingReleaseStrategy("size() == 5")
    aggregatingMessageHandler.setExpireGroupsUponCompletion(true);
    aggregatingMessageHandler.setGroupTimeoutExpression(new ValueExpression<>(3000L)); //size() ge 2 ? 5000 : -1
    aggregatingMessageHandler.setExpireGroupsUponTimeout(true);
    return aggregatingMessageHandler;
}
To pass the message between the aggregator and the task launcher method (setupRequest(String message)), I am using a channel, MyProcessor.intermidiate, defined as below:
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.stereotype.Indexed;

public interface MyProcessor {

    String intermidiate = "intermidiate";

    @Output("intermidiate")
    MessageChannel intermidiate();
}
The application.properties used is below:
aggregator.message-store-type=persistentMessageStore
spring.cloud.stream.bindings.input.destination=output
spring.cloud.stream.bindings.output.destination=input
It's not working with the above-mentioned approach.
In this class, if I change the channel name from my defined channel MyProcessor.intermidiate to Processor.INPUT or Processor.OUTPUT, then one of the two parts works (depending on which Processor.* channel name I switch to).
I want to aggregate the messages first and then launch the task on the aggregated messages in the processor, which is not happening.
See here:
public Object setupRequest(String message) {
So, you expect some string as a request payload.
Your AggregatorFactoryBean uses a DefaultAggregatingMessageGroupProcessor, which does exactly this:
List<Object> payloads = new ArrayList<Object>(messages.size());
for (Message<?> message : messages) {
    payloads.add(message.getPayload());
}
return payloads;
So, it is definitely not a String.
It is strange that you don't show what exception happens with your configuration, but I assume you need to change the setupRequest() signature to expect a List of payloads, or you need to provide some custom MessageGroupProcessor to build that String from the group of messages you have aggregated.
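For illustration, a sketch of the second option: a custom MessageGroupProcessor that rebuilds a single String from the aggregated messages, so the existing setupRequest(String) transformer keeps working. It is written in Kotlin here (the question's code is Java), and the comma separator is just an assumption:

import org.springframework.integration.aggregator.MessageGroupProcessor
import org.springframework.integration.store.MessageGroup

// Joins all aggregated payloads into one String; it would be passed to the
// AggregatingMessageHandler constructor in place of DefaultAggregatingMessageGroupProcessor.
class JoiningMessageGroupProcessor : MessageGroupProcessor {
    override fun processMessageGroup(group: MessageGroup): Any =
        group.messages.joinToString(",") { it.payload.toString() }
}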

Problem with start of @KafkaListener (Spring)

What is needed
I'm writing an application (Spring + Kotlin) that consumes information from Kafka. If I set autoStartup = "true" when declaring a @KafkaListener, the app works fine, but only if the broker is available. When the broker is unavailable, the application crashes on start. That is undesirable behavior: the application must keep working and perform its other functions.
What I tried to do
To avoid crashing the application on start, somebody on this site in another topic advised setting autoStartup = "false" when declaring the @KafkaListener, and that really did prevent the crash on start. But now I cannot successfully start the KafkaListener manually. In other examples I saw autowiring of KafkaListenerEndpointRegistry, but when I try to do it:
@Service
class KafkaConsumer @Autowired constructor(
    private val kafkaListenerEndpointRegistry: KafkaListenerEndpointRegistry
) {
IntelliJ Idea warns:
Could not autowire. No beans of 'KafkaListenerEndpointRegistry' type found.
When I try to use KafkaListenerEndpointRegistry without autowiring and perform this code:
@Service
class KafkaConsumer {

    private val logger = LoggerFactory.getLogger(this::class.java)
    private val kafkaListenerEndpointRegistry = KafkaListenerEndpointRegistry()

    @Scheduled(fixedDelay = 10000)
    fun startCpguListener() {
        val container = kafkaListenerEndpointRegistry.getListenerContainer("consumer1")
        if (!container.isRunning)
            try {
                logger.info("Kafka Consumer is not running. Trying to start...")
                container.start()
            } catch (e: Exception) {
                logger.error(e.message)
            }
    }

    @KafkaListener(
        id = "consumer1",
        topics = ["cpgdb.public.user"],
        autoStartup = "false"
    )
    private fun listen(it: ConsumerRecord<JsonNode, JsonNode>, qwe: Consumer<Any, Any>) {
        val pay = it.value().get("payload")
        val after = pay.get("after")
        val id = after["id"].asInt()
        val receivedUser = CpguUser(
            id = id,
            name = after["name"].asText()
        )
        logger.info("received user with id = $id")
    }
}
kafkaListenerEndpointRegistry.getListenerContainer("consumer1") always returns null. I guess that's because I didn't autowire kafkaListenerEndpointRegistry. How can I do that? Or, if there is another solution to my problem, I'd appreciate any help! Thanks!
Here is the Kafka config:
@Configuration
@EnableConfigurationProperties(KafkaProperties::class)
class KafkaConfiguration(private val props: KafkaProperties) {

    @Bean
    fun kafkaListenerContainerFactory(): ConcurrentKafkaListenerContainerFactory<Any, Any> {
        val factory = ConcurrentKafkaListenerContainerFactory<Any, Any>()
        factory.consumerFactory = consumerFactory()
        factory.setConcurrency(1)
        factory.setMessageConverter(MessagingMessageConverter())
        factory.setStatefulRetry(true)
        val retryTemplate = RetryTemplate()
        retryTemplate.setRetryPolicy(AlwaysRetryPolicy())
        retryTemplate.setBackOffPolicy(ExponentialBackOffPolicy())
        factory.setRetryTemplate(retryTemplate)
        val handler = SeekToCurrentErrorHandler()
        handler.isAckAfterHandle = false
        factory.setErrorHandler(handler)
        factory.containerProperties.isMissingTopicsFatal = false
        return factory
    }

    @Bean
    fun consumerFactory(): ConsumerFactory<Any, Any> {
        return DefaultKafkaConsumerFactory(consumerConfigs())
    }

    @Bean
    fun consumerConfigs(): Map<String, Any> {
        return mapOf(
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG to props.bootstrap.address,
            ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG to JsonDeserializer::class.java,
            ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG to JsonDeserializer::class.java,
            ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG to listOf(MonitoringConsumerInterceptor::class.java),
            ConsumerConfig.CLIENT_ID_CONFIG to props.receiver.clientId,
            ConsumerConfig.GROUP_ID_CONFIG to props.receiver.groupId,
            ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to "earliest",
            ConsumerConfig.ISOLATION_LEVEL_CONFIG to "read_committed",
            ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG to true
        )
    }
}
spring boot version: 2.3.0
spring-kafka version: 2.5.3
kafka-clients version: 2.5.0
Just ignore IntelliJ's warning about the autowiring; the bean does exist, it's just that IntelliJ can't detect it.
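For completeness, a minimal sketch of the scheduled starter from the question, this time with the registry injected via the constructor (the "consumer1" id and log messages come from the question; the class name is illustrative, and the null check is added because getListenerContainer() returns null until the endpoint has been registered):

import org.slf4j.LoggerFactory
import org.springframework.kafka.config.KafkaListenerEndpointRegistry
import org.springframework.scheduling.annotation.Scheduled
import org.springframework.stereotype.Service

@Service
class KafkaConsumerStarter(
    // Inject the registry bean that Spring Kafka creates for @KafkaListener endpoints.
    private val kafkaListenerEndpointRegistry: KafkaListenerEndpointRegistry
) {
    private val logger = LoggerFactory.getLogger(this::class.java)

    @Scheduled(fixedDelay = 10000)
    fun startCpguListener() {
        val container = kafkaListenerEndpointRegistry.getListenerContainer("consumer1")
        if (container != null && !container.isRunning) {
            logger.info("Kafka Consumer is not running. Trying to start...")
            container.start()
        }
    }
}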
