Message grouping with Artemis and Spring JMS not working when using transacted sessions - spring-boot

Message grouping doesn't appear to be working
My producer application sends a message to the queue via a JMS MessageProducer after setting the string property JMSXGroupID to 'product=paper'.
My producer application then sends another message the same way, also with 'product=paper'.
I can see both messages in the queue when I browse their headers in the Artemis UI. _AMQ_GROUP_ID has a value of 'product=paper' in both; JMSXGroupID is absent.
When I debug my listener application, which uses Spring JMS with a concurrency of 15-15 (15 min, 15 max), I can see both messages come through, logged under different listener containers. When I look at the map of headers for each, _AMQ_GROUP_ID is absent and JMSXGroupID has a value of null instead of 'product=paper'.
Why isn't message grouping by group id working? Does it have to do with the fact that Artemis didn't translate _AMQ_GROUP_ID back to JMSXGroupID? Or is Spring JMS not registering its multiple consumer threads as distinct consumers, so the broker never sees multiple consumers?
Edit:
I was able to get message grouping to work in my application by commenting out the transacted-session configuration in my container factory bean method, so the problem appears to be related to using transacted sessions.
Edit2:
Here's a self-contained application running against a local standalone Artemis broker (version 2.10.1) and using Spring Boot 2.2.0:
GroupidApplication (spring boot application and beans):
package com.reproduce.groupid;
import java.util.HashMap;
import java.util.Map;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.api.jms.ActiveMQJMSClient;
import org.apache.activemq.artemis.api.jms.JMSFactoryType;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jms.DefaultJmsListenerContainerFactoryConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Primary;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.config.JmsListenerContainerFactory;
import org.springframework.jms.connection.JmsTransactionManager;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.support.converter.MappingJackson2MessageConverter;
import org.springframework.jms.support.converter.MessageConverter;
import org.springframework.jms.support.converter.MessageType;
@SpringBootApplication
@EnableJms
public class GroupidApplication implements CommandLineRunner {

    private static Logger LOG = LoggerFactory.getLogger(GroupidApplication.class);

    @Autowired
    private JmsTemplate jmsTemplate;

    @Autowired
    MessageConverter messageConverter;

    public static void main(String[] args) {
        LOG.info("STARTING THE APPLICATION");
        SpringApplication.run(GroupidApplication.class, args);
        LOG.info("APPLICATION FINISHED");
    }

    @Override
    public void run(String... args) throws JMSException {
        LOG.info("EXECUTING : command line runner");
        jmsTemplate.setPubSubDomain(true);
        createAndSendTextMessage("Message1");
        createAndSendTextMessage("Message2");
        createAndSendTextMessage("Message3");
        createAndSendTextMessage("Message4");
        createAndSendTextMessage("Message5");
        createAndSendTextMessage("Message6");
    }

    private void createAndSendTextMessage(String messageBody) {
        jmsTemplate.send("local-queue", session -> {
            Message message = session.createTextMessage(messageBody);
            message.setStringProperty("JMSXGroupID", "product=paper");
            return message;
        });
    }

    // BEANS

    @Bean
    public JmsListenerContainerFactory<?> containerFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer, JmsTransactionManager jmsTransactionManager) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        configurer.configure(factory, connectionFactory);
        factory.setSubscriptionDurable(true);
        factory.setSubscriptionShared(true);
        factory.setSessionAcknowledgeMode(Session.SESSION_TRANSACTED);
        factory.setSessionTransacted(Boolean.TRUE);
        factory.setTransactionManager(jmsTransactionManager);
        return factory;
    }

    @Bean
    public MessageConverter jacksonJmsMessageConverter() {
        MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
        converter.setTargetType(MessageType.TEXT);
        converter.setTypeIdPropertyName("_type");
        return converter;
    }

    @Bean
    @Primary
    public JmsTransactionManager jmsTransactionManager(ConnectionFactory connectionFactory) {
        JmsTransactionManager jmsTransactionManager = new JmsTransactionManager(connectionFactory);
        // Lazily retrieve existing JMS Connection from given ConnectionFactory
        jmsTransactionManager.setLazyResourceRetrieval(true);
        return jmsTransactionManager;
    }

    @Bean
    @Primary
    public ConnectionFactory connectionFactory() throws JMSException {
        // Create ConnectionFactory which enables failover between primary and backup brokers
        ActiveMQConnectionFactory activeMqConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithHA(
                JMSFactoryType.CF, transportConfigurations());
        activeMqConnectionFactory.setBrokerURL("tcp://localhost:61616?jms.redeliveryPolicy.maximumRedeliveries=1");
        activeMqConnectionFactory.setUser("admin");
        activeMqConnectionFactory.setPassword("admin");
        activeMqConnectionFactory.setInitialConnectAttempts(1);
        activeMqConnectionFactory.setReconnectAttempts(5);
        activeMqConnectionFactory.setConsumerWindowSize(0);
        activeMqConnectionFactory.setBlockOnAcknowledge(true);
        activeMqConnectionFactory.setCacheDestinations(true);
        activeMqConnectionFactory.setRetryInterval(1000);
        return activeMqConnectionFactory;
    }

    private static TransportConfiguration[] transportConfigurations() {
        String connectorFactoryFqcn = NettyConnectorFactory.class.getName();
        Map<String, Object> primaryTransportParameters = new HashMap<>(2);
        primaryTransportParameters.put("host", "localhost");
        primaryTransportParameters.put("port", "61616");
        TransportConfiguration primaryTransportConfiguration = new TransportConfiguration(connectorFactoryFqcn,
                primaryTransportParameters);
        return new TransportConfiguration[] { primaryTransportConfiguration,
                new TransportConfiguration(connectorFactoryFqcn) };
    }
}
CustomSpringJmsListener:
package com.reproduce.groupid;
import javax.jms.JMSException;
import javax.jms.TextMessage;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;
@Component
public class CustomSpringJmsListener {

    protected final Logger LOG = LoggerFactory.getLogger(getClass());

    @JmsListener(destination = "local-queue", subscription = "groupid-example", containerFactory = "containerFactory", concurrency = "15-15")
    public void receive(TextMessage message) throws JMSException {
        LOG.info("Received message: " + message);
    }
}
pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.0.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.reproduce</groupId>
    <artifactId>groupid</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>groupid</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-artemis</artifactId>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.datatype</groupId>
            <artifactId>jackson-datatype-jsr310</artifactId>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
You can see that even though all of these messages have the same group id, they get logged by different listener container threads. If you comment out the transaction manager from the bean definition, it starts working again.

It's all about consumer caching. By default, when using an external transaction manager, caching is disabled, so each message is received on a new consumer.
For this app you really don't need a transaction manager; sessionTransacted is enough - the container will use local transactions.
If you must use an external transaction manager for some reason, consider changing the cache level:
factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER);
See the DefaultMessageListenerContainer (DMLC) javadocs:
/**
 * Specify the level of caching that this listener container is allowed to apply.
 * <p>Default is {@link #CACHE_NONE} if an external transaction manager has been specified
 * (to reobtain all resources freshly within the scope of the external transaction),
 * and {@link #CACHE_CONSUMER} otherwise (operating with local JMS resources).
 * <p>Some Java EE servers only register their JMS resources with an ongoing XA
 * transaction in case of a freshly obtained JMS {@code Connection} and {@code Session},
 * which is why this listener container by default does not cache any of those.
 * However, depending on the rules of your server with respect to the caching
 * of transactional resources, consider switching this setting to at least
 * {@link #CACHE_CONNECTION} or {@link #CACHE_SESSION} even in conjunction with an
 * external transaction manager.
 * @see #CACHE_NONE
 * @see #CACHE_CONNECTION
 * @see #CACHE_SESSION
 * @see #CACHE_CONSUMER
 * @see #setCacheLevelName
 * @see #setTransactionManager
 */
public void setCacheLevel(int cacheLevel) {
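For example, a sketch of both options applied to the containerFactory bean from the question (untested against the repro; bean and parameter names unchanged):
@Bean
public JmsListenerContainerFactory<?> containerFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer, JmsTransactionManager jmsTransactionManager) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setSubscriptionDurable(true);
    factory.setSubscriptionShared(true);
    factory.setSessionAcknowledgeMode(Session.SESSION_TRANSACTED);
    // Option 1: rely on local transactions only - no external transaction manager.
    factory.setSessionTransacted(Boolean.TRUE);
    // Option 2: if an external transaction manager is truly required, keep it but
    // raise the cache level so consumers are reused across messages
    // (needs import org.springframework.jms.listener.DefaultMessageListenerContainer):
    // factory.setTransactionManager(jmsTransactionManager);
    // factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER);
    return factory;
}
With a cached (stable) consumer per container thread, the broker can pin a group id to one consumer, which is what message grouping requires.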

Related

Oracle Advanced Queuing and Jakarta namespace

I am using Oracle AQ in my Java Spring Boot application, with Oracle's JMS implementation, AQAPI, as a dependency.
Recently I tried to update the application to Spring Boot 3.x, which is built on the Jakarta namespace, not Javax. However, my code is no longer compatible with Oracle AQ, since it uses the Javax namespace, i.e. javax.jms.Connection.
So the question is how to solve this problem. It seems Oracle has not yet produced a new version of AQAPI compatible with Jakarta JMS.
I had the same problem when upgrading to Spring Boot 3, so I wrote an adapter that wraps the javax.jms-based AQAPI as jakarta.jms:
<dependency>
    <groupId>net.sf.gavgav</groupId>
    <artifactId>jakarta-javax-jms-adapter</artifactId>
    <version>1.0.0</version>
</dependency>
This is just a collection of jakarta.jms interfaces delegating calls to the corresponding javax.jms implementations:
https://sourceforge.net/p/jakarta-javax-jms-adapter/code/ci/master/tree/src/main/java/net/sf/gavgav/jakartajavax/jms/
For example:
Wrapping AQjmsFactory (javax.jms.ConnectionFactory) as jakarta.jms.ConnectionFactory in Spring Boot 3:
import java.sql.SQLException;
import javax.sql.DataSource;
import jakarta.jms.ConnectionFactory;
import jakarta.jms.JMSException;
import net.sf.gavgav.jakartajavax.jms.JakartaJmsConnectionFactory;
import net.sf.gavgav.jakartajavax.jms.JmsException;
import oracle.jms.AQjmsFactory;
...
@Bean
public ConnectionFactory connectionFactory(DataSource ds) throws JMSException, SQLException {
    try {
        return new JakartaJmsConnectionFactory(AQjmsFactory.getQueueConnectionFactory(ds));
    } catch (javax.jms.JMSException e) {
        throw JmsException.wrap(e);
    }
}
Implementing Spring's DestinationResolver for JmsTemplate or DefaultJmsListenerContainerFactory:
import jakarta.jms.Destination;
import jakarta.jms.JMSException;
import jakarta.jms.Session;
import net.sf.gavgav.jakartajavax.jms.JakartaJmsQueue;
import net.sf.gavgav.jakartajavax.jms.JakartaJmsSession;
import net.sf.gavgav.jakartajavax.jms.JmsException;
import oracle.jms.AQjmsSession;
import org.springframework.jms.support.destination.DestinationResolver;

public class AqDestinationResolver implements DestinationResolver {

    private final String schema;

    public AqDestinationResolver(String schema) {
        this.schema = schema;
    }

    @Override
    public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException {
        JakartaJmsSession jakartaSession = (JakartaJmsSession) session;
        try {
            AQjmsSession aqjmsSession = ((AQjmsSession) jakartaSession.getSession());
            javax.jms.Queue aqjmsQueue = aqjmsSession.getQueue(schema, destinationName);
            return new JakartaJmsQueue(aqjmsQueue);
        } catch (javax.jms.JMSException e) {
            throw JmsException.wrap(e);
        }
    }
}
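To wire the resolver in, a minimal sketch of a listener container factory using it (the schema name "MYSCHEMA" is a placeholder of mine, not from the original answer):
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // "MYSCHEMA" is a hypothetical schema owning the AQ queues.
    factory.setDestinationResolver(new AqDestinationResolver("MYSCHEMA"));
    return factory;
}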

How to Configure JMS Listener with logging Sleuth Trace ID in SQS Communication

I am using Spring JMS to send and receive messages via AWS SQS. Below is my JMS configuration class.
I am using Spring Sleuth to log trace IDs.
What I want to achieve is that the logged trace ID should be the same in the producer class and in the consumer class.
But I am seeing different trace IDs. How can I solve this issue?
import javax.jms.ConnectionFactory;
import javax.jms.Session;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.http.converter.json.Jackson2ObjectMapperBuilder;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.support.converter.MappingJackson2MessageConverter;
import org.springframework.jms.support.converter.MessageConverter;
import org.springframework.jms.support.converter.MessageType;
import org.springframework.jms.support.destination.DynamicDestinationResolver;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.util.ISO8601DateFormat;
import brave.Tracing;
import brave.jms.JmsTracing;
import brave.messaging.MessagingTracing;
/**
 * Description of JMSConfig
 *
 * @author
 * @version Nov 25, 2020
 */
@Configuration
@EnableJms
@Profile({ "Generic" })
public class JmsConfig {

    @Autowired
    private Tracing tracing;

    SQSConnectionFactory connectionFactory = SQSConnectionFactory.builder().withRegion(Region.getRegion(
            Regions.US_EAST_1)).withAWSCredentialsProvider(new DefaultAWSCredentialsProviderChain()).build();

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        ConnectionFactory tracingConnectionFactory = getConnectionFactoryWrappedWithTracing(connectionFactory);
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(tracingConnectionFactory);
        factory.setDestinationResolver(new DynamicDestinationResolver());
        factory.setConcurrency("3-10");
        factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
        factory.setMessageConverter(messageConverter());
        return factory;
    }

    @Bean
    public MessageConverter messageConverter() {
        Jackson2ObjectMapperBuilder builder = new Jackson2ObjectMapperBuilder();
        builder.serializationInclusion(JsonInclude.Include.NON_EMPTY);
        builder.dateFormat(new ISO8601DateFormat());
        MappingJackson2MessageConverter mappingJackson2MessageConverter = new MappingJackson2MessageConverter();
        mappingJackson2MessageConverter.setObjectMapper(builder.build());
        mappingJackson2MessageConverter.setTargetType(MessageType.TEXT);
        mappingJackson2MessageConverter.setTypeIdPropertyName("documentType");
        return mappingJackson2MessageConverter;
    }

    @Bean
    public JmsTemplate defaultJmsTemplate() {
        JmsTemplate jmsTemplate = new JmsTemplate(this.connectionFactory);
        jmsTemplate.setMessageConverter(messageConverter());
        return jmsTemplate;
    }

    @Bean
    public ConnectionFactory getConnectionFactoryWrappedWithTracing(SQSConnectionFactory sqsConnectionFactory) {
        MessagingTracing messagingTracing = MessagingTracing.newBuilder(tracing).build();
        JmsTracing jmsTracing = JmsTracing.create(messagingTracing);
        ConnectionFactory tracingConnectionFactory = jmsTracing.connectionFactory(sqsConnectionFactory);
        return tracingConnectionFactory;
    }
}
Message sender method:
public String postMessage(String message) {
    log.info("Sending message");
    jmsTemplate.convertAndSend(queueName, new User(message));
    return message;
}
Message receiver:
@JmsListener(destination = "${app.queue.name}", containerFactory = "jmsListenerContainerFactory")
public void receiveMessage(User message) throws JsonProcessingException {
    log.info("Received message: " + message);
    log.info("Received message: {}", message.getName());
}
When I try to send a message I can see different trace IDs: 676d14076ee1cbb7 for the producer and c3464298c2503b35 for the consumer. Ideally I want the trace ID to be the same.
13:37:12 INFO cloud-messaging-service 676d14076ee1cbb7 GenericMessageProducer: Sending message
13:37:14 INFO cloud-messaging-service 676d14076ee1cbb7 SQSMessageProducer: Message sent to SQS with SQS-assigned messageId: 443f6d62-ac20-406a-9ab4-be4bfb1db8d3
13:37:14 INFO cloud-messaging-service 676d14076ee1cbb7 SQSSession: Shutting down SessionCallBackScheduler executor
13:37:21 INFO cloud-messaging-service c3464298c2503b35 GenericMessageConsumer: Received message: User{name=viyaan}
13:37:21 INFO cloud-messaging-service c3464298c2503b35 GenericMessageConsumer: Received message: viyaan
POM dependencies:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>${aws-java-sdk}</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>amazon-sqs-java-messaging-lib</artifactId>
    <version>${amazon-sqs-java-messaging-lib}</version>
    <type>jar</type>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jms</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.aws</groupId>
    <artifactId>brave-instrumentation-aws-java-sdk-sqs</artifactId>
    <version>${brave-instrumentation-aws-java-sdk-sqs}</version>
</dependency>
I had to tweak the JmsTemplate: adding the tracingConnectionFactory to the JmsTemplate made it work, since the template must also be built from the tracing-wrapped connection factory for the trace context to be propagated on send.
@Bean
public JmsTemplate defaultJmsTemplate() {
    ConnectionFactory tracingConnectionFactory = getConnectionFactoryWrappedWithTracing(sqsConnectionFactory());
    JmsTemplate jmsTemplate = new JmsTemplate(tracingConnectionFactory);
    jmsTemplate.setMessageConverter(messageConverter());
    return jmsTemplate;
}

@Timed not working despite registering TimedAspect explicitly - spring boot 2.1

I need to measure method metrics using Micrometer's @Timed annotation. Since it doesn't work on arbitrary methods out of the box, I added the TimedAspect configuration explicitly in my Spring config. I have referred to this post for the exact config:
Note: I have tried adding a separate config class JUST for this, as well as including the TimedAspect bean as part of my existing configuration class.
How to measure service methods using spring boot 2 and micrometer
Yet, it unfortunately doesn't work. The bean is registered, and the invocation from the config class goes through successfully on startup (found this while debugging). However, the code in the @Around advice never seems to execute.
No error is thrown, and I'm able to view the default 'system' metrics on the /metrics and /prometheus endpoints.
Note: This is AFTER getting the method to be invoked several times by executing a business flow. I'm aware that it probably doesn't show up in the metrics if the method isn't invoked at all.
Versions: spring-boot 2.1.1, spring 5.3, micrometer 1.1.4, actuator 2.1
Tried everything going by the below posts:
How to measure service methods using spring boot 2 and micrometer
https://github.com/izeye/sample-micrometer-spring-boot/tree/timed-annotation
https://github.com/micrometer-metrics/micrometer/issues/361
Update: So, the issue seems to occur ONLY when @Timed is on an abstract method's implementation, which is called via another method. I was able to reproduce it via a simple example. Refer to the @Timed("say_hello_example") annotation: it simply gets ignored and doesn't show up when I hit the prometheus endpoint.
Code:
Abstract Class
public abstract class AbstractUtil {

    public abstract void sayhello();

    public void sayhellowithtimed(String passedVar) {
        System.out.println("Passed var =>" + passedVar);
        System.out.println("Calling abstract sayhello....");
        sayhello();
    }
}
Impl Class
@Component
@Scope("prototype")
public class ExampleUtil extends AbstractUtil {

    public static final String HELLO = "HELLO";

    @Timed("dirwatcher_handler")
    public void handleDirectoryWatcherChange(WatchEvent event) {
        System.out.println("Event kind:" + event.kind() + ". File affected: " + event.context());
    }

    @Timed("say_hello_example")
    @Override
    public void sayhello() {
        System.out.println(HELLO);
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
A simple DirWatcher implementation class...
package com.example;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.event.ApplicationStartedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Scope;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;
import java.io.IOException;
import java.nio.file.*;
@Component
@Scope("prototype")
public class StartDirWatcher implements ApplicationListener<ApplicationStartedEvent> {

    @Value("${directory.path:/apps}")
    public String directoryPath;

    @Autowired
    private ExampleUtil util;

    private void monitorDirectoryForChanges() throws IOException, InterruptedException {
        WatchService watchService = FileSystems.getDefault().newWatchService();
        Path path = Paths.get(directoryPath);
        path.register(
                watchService,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_DELETE,
                StandardWatchEventKinds.ENTRY_MODIFY);
        WatchKey key;
        while ((key = watchService.take()) != null) {
            for (WatchEvent<?> event : key.pollEvents()) {
                util.handleDirectoryWatcherChange(event);
                util.sayhellowithtimed("GOD_OF_SMALL_THINGS_onAPPEvent");
            }
            key.reset();
        }
    }

    @Override
    public void onApplicationEvent(ApplicationStartedEvent applicationStartedEvent) {
        try {
            monitorDirectoryForChanges();
        } catch (Throwable e) {
            System.err.println("ERROR!! " + e.getMessage());
            e.printStackTrace();
        }
    }
}
The Spring Boot Application Class
package com.example;
import io.micrometer.core.aop.TimedAspect;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;
@EnableAspectJAutoProxy
@ComponentScan
@Configuration
@SpringBootApplication
public class ExampleStarter {

    @Bean
    MeterRegistryCustomizer<PrometheusMeterRegistry> metricsCommonTags() {
        return registry -> registry.config().commonTags("app.name", "example.app");
    }

    @Bean
    TimedAspect timedAspect(MeterRegistry reg) {
        return new TimedAspect(reg);
    }

    public static void main(String[] args) {
        SpringApplication.run(ExampleStarter.class, args);
    }
}
The main pom.xml file
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.metrics.timed.example</groupId>
    <artifactId>example-app</artifactId>
    <version>1.0-SNAPSHOT</version>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <version>2.1.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
            <version>2.1.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-prometheus</artifactId>
            <version>1.1.2</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-aop</artifactId>
            <version>2.1.1.RELEASE</version>
        </dependency>
    </dependencies>
</project>
I use Spring Boot 2.2.6.RELEASE, and this MetricConfig works for me:
@Configuration
public class MetricConfig {

    @Bean
    MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        return registry -> registry.config().commonTags("application", "my app");
    }

    @Bean
    TimedAspect timedAspect(MeterRegistry registry) {
        return new TimedAspect(registry);
    }
}
In application.yml:
management:
  endpoints:
    web:
      exposure:
        include: ["health", "prometheus"]
  endpoint:
    beans:
      cache:
        time-to-live: 10s
@Timed uses the AOP (aspect-oriented programming) concept, in which the proxy does not apply to a second-level method call made from within the same object.
You can define the second-level method in a new bean/class; this way @Timed will work for the second-level method call, as sketched below.
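A minimal sketch of that workaround, reusing the names from the question (the extracted HelloService bean is hypothetical, not from the original answer):
// Hypothetical bean extracted from ExampleUtil so the @Timed call crosses a proxy boundary.
@Component
class HelloService {

    @Timed("say_hello_example")
    public void sayhello() {
        System.out.println("HELLO");
    }
}

@Component
class ExampleUtil {

    @Autowired
    private HelloService helloService;

    public void sayhellowithtimed(String passedVar) {
        System.out.println("Passed var =>" + passedVar);
        // Goes through the Spring AOP proxy of HelloService, so TimedAspect runs.
        helloService.sayhello();
    }
}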
I had the same problem. In my case I realised that the metric becomes visible under actuator/metrics only after the method has been called at least once, unlike manually created timers/counters, which are visible directly after startup.

Retry mechanism for producer's client of ActiveMQ with JMS and spring

Is there a retry mechanism, or an example implementation of one, for a producer using ActiveMQ with JMS (more precisely, with JmsTemplate) and the Spring framework?
The use case I want to handle is this: when the broker is not available, for example, I want to make some number of retries, at most 6 (if possible with exponential delays between them). So I also need to track the number of retries for a message between attempts.
I am aware of the redelivery policy for the consumer, but I want to implement a reliable producer client side as well.
Thanks,
Simeon
I think the easiest way is to use what already exists for this: an embedded broker with persistence enabled, which the producer uses as the target for its messages, combined with either a Camel route that reads from the local queue and forwards to the remote one, or a JmsBridgeConnector or NetworkConnector. I think the JmsBridgeConnector is easier.
Here is a Spring code example.
The producer has to use jmsConnectionFactory() to create a ConnectionFactory:
package com.example.amq;
import java.io.File;
import javax.jms.ConnectionFactory;
import javax.jms.QueueConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.jms.JmsConnector;
import org.apache.activemq.network.jms.OutboundQueueBridge;
import org.apache.activemq.network.jms.ReconnectionPolicy;
import org.apache.activemq.network.jms.SimpleJmsQueueConnector;
import org.apache.activemq.store.PersistenceAdapter;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class ActiveMQConfiguration {

    public static final String DESTINATION_NAME = "localQ";

    @Bean // (initMethod = "start", destroyMethod = "stop")
    public BrokerService broker() throws Exception {
        final BrokerService broker = new BrokerService();
        broker.addConnector("vm://localhost");
        SimpleJmsQueueConnector simpleJmsQueueConnector = new SimpleJmsQueueConnector();
        OutboundQueueBridge bridge = new OutboundQueueBridge();
        bridge.setLocalQueueName(DESTINATION_NAME);
        bridge.setOutboundQueueName("remoteQ");
        OutboundQueueBridge[] outboundQueueBridges = new OutboundQueueBridge[] { bridge };
        simpleJmsQueueConnector.getReconnectionPolicy().setMaxSendRetries(ReconnectionPolicy.INFINITE);
        simpleJmsQueueConnector.setOutboundQueueBridges(outboundQueueBridges);
        simpleJmsQueueConnector.setLocalQueueConnectionFactory((QueueConnectionFactory) jmsConnectionFactory());
        simpleJmsQueueConnector.setOutboundQueueConnectionFactory(outboundQueueConnectionFactory());
        JmsConnector[] jmsConnectors = new JmsConnector[] { simpleJmsQueueConnector };
        broker.setJmsBridgeConnectors(jmsConnectors);
        PersistenceAdapter persistenceAdapter = new KahaDBPersistenceAdapter();
        File dir = new File(System.getProperty("user.home") + File.separator + "kaha");
        if (!dir.exists()) {
            dir.mkdirs();
        }
        persistenceAdapter.setDirectory(dir);
        broker.setPersistenceAdapter(persistenceAdapter);
        broker.setPersistent(true);
        broker.setUseShutdownHook(false);
        broker.setUseJmx(true);
        return broker;
    }

    @Bean
    public QueueConnectionFactory outboundQueueConnectionFactory() {
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(
                "auto://localhost:5671");
        connectionFactory.setUserName("admin");
        connectionFactory.setPassword("admin");
        return connectionFactory;
    }

    @Bean
    public ConnectionFactory jmsConnectionFactory() {
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost");
        connectionFactory.setObjectMessageSerializationDefered(true);
        connectionFactory.setCopyMessageOnSend(false);
        return connectionFactory;
    }
}
By using Camel:
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.activemq.camel.component.ActiveMQConfiguration;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
public class ActiveMQCamelBridge {

    public static void main(String args[]) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addComponent("inboundQueue", ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));
        ActiveMQComponent answer = ActiveMQComponent.activeMQComponent("tcp://localhost:5671");
        if (answer.getConfiguration() instanceof ActiveMQConfiguration) {
            ((ActiveMQConfiguration) answer.getConfiguration()).setUserName("admin");
            ((ActiveMQConfiguration) answer.getConfiguration()).setPassword("admin");
        }
        context.addComponent("outboundQueue", answer);
        context.addRoutes(new RouteBuilder() {
            public void configure() {
                from("inboundQueue:queue:localQ").to("outboundQueue:queue:remoteQ");
            }
        });
        context.start();
        Thread.sleep(60 * 5 * 1000);
        context.stop();
    }
}
The producer does not provide any kind of retry mechanism like the consumer does. You need to make sure in your own code that messages sent by the producer are acknowledged by the broker.
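For illustration, a minimal sketch of hand-rolled producer-side retries using Spring Retry (the spring-retry dependency and the queue name "remoteQ" are assumptions, not part of the original answers): at most 6 attempts with exponential delays, matching the use case in the question.
import org.springframework.jms.core.JmsTemplate;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

public class RetryingProducer {

    private final JmsTemplate jmsTemplate;
    private final RetryTemplate retryTemplate = new RetryTemplate();

    public RetryingProducer(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
        ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
        backOff.setInitialInterval(1000L); // first delay: 1 second
        backOff.setMultiplier(2.0);        // then 2s, 4s, 8s, ...
        retryTemplate.setBackOffPolicy(backOff);
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(6)); // at most 6 attempts
    }

    public void send(final Object payload) {
        // The attempt count between retries is available via context.getRetryCount().
        retryTemplate.execute(context -> {
            jmsTemplate.convertAndSend("remoteQ", payload);
            return null;
        });
    }
}
Unlike the broker-bridge approach above, this keeps the retry state in the producer's JVM, so messages are lost if the process dies mid-retry; the embedded persistent broker avoids that.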

Spring Boot with Apache Kafka: Messages not being read

I am currently setting up a Spring Boot application with a Kafka listener.
I am trying to code only the consumer. For the producer, I am manually sending messages from the Kafka console for now.
I followed the example:
http://www.source4code.info/2016/09/spring-kafka-consumer-producer-example.html
I tried running this as a Spring Boot application but am not able to see any messages being received. There are already some messages in my local Kafka topic.
C:\software\kafka_2.11-0.10.1.0\kafka_2.11-0.10.1.0\kafka_2.11-0.10.1.0\bin\windows>kafka-console-producer.bat --broker-list localhost:9092 --topic test
this is a message
testing again
My Spring Boot application is:
@EnableDiscoveryClient
@SpringBootApplication
public class KafkaApplication {

    /**
     * Run the application using Spring Boot and an embedded servlet engine.
     *
     * @param args
     *            Program arguments - ignored.
     */
    public static void main(String[] args) {
        // Tell server to look for registration.properties or registration.yml
        System.setProperty("spring.config.name", "kafka-server");
        SpringApplication.run(KafkaApplication.class, args);
    }
}
And Kafka configuration is:
package kafka;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import java.util.HashMap;
import java.util.Map;
@Configuration
@EnableKafka
public class KafkaConsumerConfig {

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        //factory.setConcurrency(1);
        //factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        //propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        //propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
        //propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        //propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
        //propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return propsMap;
    }

    @Bean
    public Listener listener() {
        return new Listener();
    }
}
And Kafka listener is:
package kafka;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import java.util.logging.Logger;
public class Listener {

    protected Logger logger = Logger.getLogger(Listener.class.getName());

    private CountDownLatch countDownLatch1 = new CountDownLatch(1);

    public CountDownLatch getCountDownLatch1() {
        return countDownLatch1;
    }

    @KafkaListener(topics = "test")
    public void listen(ConsumerRecord<?, ?> record) {
        logger.info("Received message: " + record);
        System.out.println("Received message: " + record);
        countDownLatch1.countDown();
    }
}
I am trying this for the first time. Please let me know if I am doing anything wrong. Any help will be greatly appreciated.
You did not set ConsumerConfig.AUTO_OFFSET_RESET_CONFIG so the default is "latest". Set it to "earliest" so the consumer will receive messages already in the topic.
ConsumerConfig.AUTO_OFFSET_RESET_CONFIG takes effect only if the consumer group does not already have an offset for a topic partition. If you already ran the consumer with the "latest" setting, then running the consumer again with a different setting does not change the offset. The consumer must use a different group so Kafka will assign offsets for that group.
I observed that you commented out the consumer group.id property:
//propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
Here is how it is described in the official Kafka documentation:
A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy.
After uncommenting that row, the consumer worked.
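Putting both answers together, a sketch of the consumer config with a group id and the earliest offset reset (the group name "group1" comes from the commented-out line in the question):
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> propsMap = new HashMap<>();
    propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
    propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    // Required for subscribe(topic); also scopes the committed offsets.
    propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
    // Only applies when the group has no committed offset yet.
    propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return propsMap;
}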
You will need to annotate your Listener class with either @Service or @Component so that Spring Boot can load the Kafka listener.
package kafka;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import java.util.logging.Logger;
@Component
public class Listener {

    protected Logger logger = Logger.getLogger(Listener.class.getName());

    private CountDownLatch countDownLatch1 = new CountDownLatch(1);

    public CountDownLatch getCountDownLatch1() {
        return countDownLatch1;
    }

    @KafkaListener(topics = "test")
    public void listen(ConsumerRecord<?, ?> record) {
        logger.info("Received message: " + record);
        System.out.println("Received message: " + record);
        countDownLatch1.countDown();
    }
}
The above suggestions are good. If you have followed all of them but it still does not work, please check whether lazy initialization is enabled for your application.
Lazy initialization is false by default. However, if your application has an explicit setting like the one below,
spring.main.lazy-initialization=true
please comment it out or set it to false.
