I have a service that reads all messages from AWS SQS.
@Slf4j
@Configuration
@EnableJms
public class JmsConfig {

    private final SQSConnectionFactory connectionFactory;

    public JmsConfig(
            @Value("${amazon.sqs.accessKey}") String awsAccessKey,
            @Value("${amazon.sqs.secretKey}") String awsSecretKey,
            @Value("${amazon.sqs.region}") String awsRegion,
            @Value("${amazon.sqs.endpoint}") String awsEndpoint) {
        connectionFactory = new SQSConnectionFactory(
                new ProviderConfiguration(),
                AmazonSQSClientBuilder.standard()
                        .withCredentials(new AWSStaticCredentialsProvider(
                                new BasicAWSCredentials(awsAccessKey, awsSecretKey)))
                        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(awsEndpoint, awsRegion))
                        .build());
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(this.connectionFactory);
        factory.setDestinationResolver(new DynamicDestinationResolver());
        factory.setConcurrency("3-10");
        factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
        factory.setReceiveTimeout(2000L); // ??????????
        return factory;
    }

    @Bean
    public JmsTemplate defaultJmsTemplate() {
        return new JmsTemplate(this.connectionFactory);
    }
}
I've heard about long polling, so I wonder how I could use it in my case. I also wonder how this listener works under the hood - I do not want to make unnecessary calls to AWS SQS.
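For reference, long polling is an attribute of the queue itself rather than a Spring setting: with ReceiveMessageWaitTimeSeconds greater than zero (maximum 20), each ReceiveMessage call waits up to that long for a message instead of returning immediately, which sharply reduces empty responses. A minimal sketch in the same SDK v1 style as the config above; the queue name "my-queue" is illustrative:

AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(
                new BasicAWSCredentials(awsAccessKey, awsSecretKey)))
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(awsEndpoint, awsRegion))
        .build();
String queueUrl = sqs.getQueueUrl("my-queue").getQueueUrl();
// Enable long polling for every consumer of this queue
sqs.setQueueAttributes(new SetQueueAttributesRequest()
        .withQueueUrl(queueUrl)
        .addAttributesEntry("ReceiveMessageWaitTimeSeconds", "20"));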
My listener reads messages, converts them to an object and saves it to a Redis db:
@JmsListener(destination = "${amazon.sqs.destination}")
public void receive(String requestJSON) throws JMSException {
    log.info("Received");
    try {
        Trace trace = Trace.fromJSON(requestJSON);
        traceRepository.save(trace);
        (...)
I'd like to know your opinions - what is the best approach to minimize unnecessary calls to SQS to get messages? Should I use, for example,
factory.setReceiveTimeout(2000L);
Unfortunately there is very little information about this on the Internet.
Thanks,
Matthew
Related
My Spring Boot service needs to consume Kafka events off one topic, do some processing (including writing to the db with JPA) and then produce some events on a new topic. No matter what happens, I cannot have a situation where I have published events without updating the database, and if anything goes wrong I want the next poll of the consumer to retry the event. My processing logic, including the db update, is idempotent, so retrying it is fine.
I think I have achieved exactly once semantics as described on https://docs.spring.io/spring-kafka/reference/html/#exactly-once by using a ChainedKafkaTransactionManager like so:
@Bean
public ChainedKafkaTransactionManager chainedTransactionManager(JpaTransactionManager jpa, KafkaTransactionManager<?, ?> kafka) {
    kafka.setTransactionSynchronization(SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
    return new ChainedKafkaTransactionManager(kafka, jpa);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory,
        ChainedKafkaTransactionManager chainedTransactionManager) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    factory.getContainerProperties().setTransactionManager(chainedTransactionManager);
    return factory;
}
The relevant kafka config in my application.yaml file looks like:
kafka:
  ...
  consumer:
    group-id: myGroupId
    auto-offset-reset: earliest
    properties:
      isolation.level: read_committed
    ...
  producer:
    transaction-id-prefix: ${random.uuid}
    ...
Because the commit order is critical to my application, I would like to write an integration test to prove that the commits happen in the desired order and that, if an error occurs during the commit to Kafka, the original event is consumed again. However, I am struggling to find a good way of causing a failure between the db commit and the Kafka commit.
Any suggestions or alternative ways I could do this?
Thanks
You could use a custom ProducerFactory to return a MockProducer (provided by kafka-clients).
Set the commitTransactionException so that it is thrown when the KTM (KafkaTransactionManager) tries to commit the transaction.
EDIT
Here is an example; it doesn't use the chained TM, but that shouldn't make a difference.
@SpringBootApplication
public class So66018178Application {

    public static void main(String[] args) {
        SpringApplication.run(So66018178Application.class, args);
    }

    @KafkaListener(id = "so66018178", topics = "so66018178")
    public void listen(String in) {
        System.out.println(in);
    }
}
spring.kafka.producer.transaction-id-prefix=tx-
spring.kafka.consumer.auto-offset-reset=earliest
@SpringBootTest(classes = { So66018178Application.class, So66018178ApplicationTests.Config.class })
@EmbeddedKafka(bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class So66018178ApplicationTests {

    @Autowired
    EmbeddedKafkaBroker broker;

    @Test
    void kafkaCommitFails(@Autowired KafkaListenerEndpointRegistry registry, @Autowired Config config)
            throws InterruptedException {

        registry.getListenerContainer("so66018178").stop();
        AtomicReference<Exception> listenerException = new AtomicReference<>();
        CountDownLatch latch = new CountDownLatch(1);
        ((ConcurrentMessageListenerContainer<String, String>) registry.getListenerContainer("so66018178"))
                .setAfterRollbackProcessor(new AfterRollbackProcessor<>() {

                    @Override
                    public void process(List<ConsumerRecord<String, String>> records, Consumer<String, String> consumer,
                            Exception exception, boolean recoverable) {
                        listenerException.set(exception);
                        latch.countDown();
                    }

                });
        registry.getListenerContainer("so66018178").start();

        Map<String, Object> props = KafkaTestUtils.producerProps(this.broker);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(props);
        KafkaTemplate<String, String> template = new KafkaTemplate<>(pf);
        template.send("so66018178", "test");
        assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
        assertThat(listenerException.get()).isInstanceOf(ListenerExecutionFailedException.class)
                .hasCause(config.exception);
    }

    @Configuration
    public static class Config {

        RuntimeException exception = new RuntimeException("test");

        @Bean
        public ProducerFactory<Object, Object> pf() {
            return new ProducerFactory<>() {

                @Override
                public Producer<Object, Object> createProducer() {
                    MockProducer<Object, Object> mockProducer = new MockProducer<>();
                    mockProducer.commitTransactionException = Config.this.exception;
                    return mockProducer;
                }

                @Override
                public Producer<Object, Object> createProducer(String txIdPrefix) {
                    Producer<Object, Object> producer = createProducer();
                    producer.initTransactions();
                    return producer;
                }

                @Override
                public boolean transactionCapable() {
                    return true;
                }

            };
        }

    }

}
Do not use ChainedKafkaTransactionManager anymore; it is deprecated.
According to the docs (https://docs.spring.io/spring-kafka/reference/html/#container-transaction-manager):
"The ChainedKafkaTransactionManager is now deprecated, since version 2.7; see the javadocs for its super class ChainedTransactionManager for more information. Instead, use a KafkaTransactionManager in the container to start the Kafka transaction and annotate the listener method with @Transactional to start the other transaction."
In my tests, where I tried to simulate an exception in the producer after the DB transaction had committed, I simply left a mandatory field empty in the Kafka event (I was using an Avro schema), and in the second test I deleted the output topic with the help of the Kafka Admin. Then I wrote some asserts to verify that the Kafka listener was called again on retry.
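A sketch of that kind of assert (Awaitility and AssertJ are assumptions here; the listener, topic and payload type are illustrative, not the poster's actual code):

private final AtomicInteger deliveries = new AtomicInteger();

@KafkaListener(id = "retryProbe", topics = "orders")
public void listen(OrderEvent event) {
    deliveries.incrementAndGet();
    // ... DB write commits, then producing the intentionally invalid event
    // fails, the Kafka transaction rolls back and the record is redelivered ...
}

// In the test: redelivery means the listener ran more than once.
Awaitility.await()
        .atMost(Duration.ofSeconds(10))
        .untilAsserted(() -> assertThat(deliveries.get()).isGreaterThan(1));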
I'm using Kafka with Spring Boot. I use REST controllers to call the producer/consumer APIs. The producer class is able to add messages to the topic; I verified this using the console consumer command-line utility (kafka-console-consumer.sh). However, my consumer class is not able to receive them in Java for further processing.
The @KafkaListener on my consumer class's listener method should receive messages when my producer class posts them to the topic, but this is not happening. Any help appreciated.
Is it still necessary for the consumer to subscribe and poll for records when I have already created a KafkaListenerContainerFactory that is responsible for invoking the consumer listener method when a message is posted to the topic?
Consumer Class
@Component
public class KafkaListenersExample {

    private final List<KafkaPayload> messages = new ArrayList<>();

    @KafkaListener(topics = "test_topic", containerFactory = "kafkaListenerContainerFactory")
    public void listener(KafkaPayload data) {
        synchronized (messages) {
            messages.add(data);
        }
        //System.out.println("message from kafka :" + data);
    }

    public List<KafkaPayload> getMessages() {
        return messages;
    }
}
Consumer Config
@Configuration
class KafkaConsumerConfig {

    private String bootstrapServers = "localhost:9092";

    @Bean
    public ConsumerFactory<String, KafkaPayload> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, KafkaPayload> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, KafkaPayload> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerConfigs());
        return factory;
    }
}
The listener container creates the consumer, subscribes, and takes care of the polling.
Turning on DEBUG logging should help determine what's wrong.
If the records are already in the topic, you need to set ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to earliest. Otherwise, the consumer starts consuming from the end of the topic (latest).
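For example, in the consumerConfigs() bean above that is one extra map entry:

// Start from the earliest record when the group has no committed offset yet
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");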
I'm using Apache ActiveMQ 5.15.13 and Spring Boot 2.3.1.RELEASE. I'm trying to configure a durable subscriber, but I'm not able to. At runtime my application gives me this error:
Cause: setClientID call not supported on proxy for shared Connection. Set the 'clientId' property on the SingleConnectionFactory instead.
Below is the complete ActiveMQ setup with Spring Boot.
JMSConfiguration
@Configuration
public class JMSConfiguration
{
    @Bean
    public JmsListenerContainerFactory<?> connectionFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer)
    {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        // This provides all of Boot's defaults to this factory, including the message converter
        configurer.configure(factory, connectionFactory);
        // You could still override some of Boot's defaults if necessary.
        factory.setPubSubDomain(true);
        /* below config to set durable subscriber */
        factory.setClientId("brokerClientId");
        factory.setSubscriptionDurable(true);
        // factory.setSubscriptionShared(true);
        return factory;
    }

    @Bean
    public MessageConverter jacksonJmsMessageConverter()
    {
        MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
        converter.setTargetType(MessageType.TEXT);
        converter.setTypeIdPropertyName("_type");
        return converter;
    }
}
Receiver Class
public class Receiver {

    private static final String MESSAGE_TOPIC = "message_topic";

    private Logger logger = LoggerFactory.getLogger(Receiver.class);
    private static AtomicInteger id = new AtomicInteger();

    @Autowired
    ConfirmationReceiver confirmationReceiver;

    @JmsListener(destination = MESSAGE_TOPIC,
            id = "comercial",
            subscription = MESSAGE_TOPIC,
            containerFactory = "connectionFactory")
    public void receiveMessage(Product product, Message message)
    {
        logger.info(" >> Original received message: " + message);
        logger.info(" >> Received product: " + product);
        System.out.println("Received " + product);
        confirmationReceiver.sendConfirmation(new Confirmation(id.incrementAndGet(), "User " +
                product.getName() + " received."));
    }
}
application.properties
spring.jms.pub-sub-domain=true
spring.jms.listener.concurrency=1
spring.jms.listener.max-concurrency=2
spring.jms.listener.acknowledge-mode=auto
spring.jms.listener.auto-startup=true
spring.jms.template.delivery-mode:persistent
spring.jms.template.priority: 100
spring.jms.template.qos-enabled: true
spring.jms.template.receive-timeout: 1000
spring.jms.template.time-to-live: 36000
When I try to run the application it gives me the error below:
Could not refresh JMS Connection for destination 'message_topic' - retrying using FixedBackOff{interval=5000, currentAttempts=1, maxAttempts=unlimited}. Cause: setClientID call not supported on proxy for shared Connection. Set the 'clientId' property on the SingleConnectionFactory instead.
My application has a standalone producer and consumer. I tried to Google the error but nothing helped.
Late answer. Here is what worked for me:
- Use SingleConnectionFactory in place of ConnectionFactory
- Set the client id on the SingleConnectionFactory
- Do not set the client id on the listener container factory
@Bean
public JmsListenerContainerFactory<?> connectionFactory(SingleConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    // Add this
    connectionFactory.setClientId("your-client-id");
    // Do not do this
    //factory.setClientId("brokerClientId");
    ...
}
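Put together with the factory setup from the question, the whole bean might look like this (a sketch; the client id value is illustrative):

@Bean
public JmsListenerContainerFactory<?> connectionFactory(SingleConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    // The client id belongs on the (single, shared) connection factory...
    connectionFactory.setClientId("your-client-id");

    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true);
    // ...while the durable-subscription flag stays on the container factory
    factory.setSubscriptionDurable(true);
    return factory;
}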
I have configured my Rabbit properties via application.yaml and Spring configuration properties.
Thus, when I configure exchanges, queues and bindings, I can use the getters of my properties:
@Bean
Binding binding(Queue queue, TopicExchange exchange) {
    return BindingBuilder.bind(queue).to(exchange).with(properties.getQueue());
}

@Bean
Queue queue() {
    return new Queue(properties.getQueue(), true);
}

@Bean
TopicExchange exchange() {
    return new TopicExchange(properties.getExchange());
}
However, when I configure a @RabbitListener to log the messages from the queue, I have to use the full property path, like:
#RabbitListener(queues = "${some.long.path.to.the.queue.name}")
public void onMessage(
final Message message, final Channel channel) throws Exception {
log.info("receiving message: {}#{}", message, channel);
}
I want to avoid this error-prone hard-coded String and refer to the configuration properties bean instead, like:
@RabbitListener(queues = "${properties.getQueue()}")
I had a similar issue once with @EventListener, where using a bean reference "#bean.method()" helped, but it does not work here; the bean expression is just interpreted as the queue name, which fails because a queue named "#bean...." does not exist.
Is it possible to use ConfigurationProperties beans for the RabbitListener queue configuration?
Something like this worked for me, where I just used the bean and SpEL.
@Autowired
Queue queue;

@RabbitListener(queues = "#{queue.getName()}")
I was finally able to accomplish what we both wanted by taking what @David Diehl suggested, using the bean and SpEL, but with MyRabbitProperties itself instead. I removed the @EnableConfigurationProperties(MyRabbitProperties.class) on the config class and registered the bean the standard way:
@Configuration
//@EnableConfigurationProperties(RabbitProperties.class)
@EnableRabbit
public class RabbitConfig {

    //private final MyRabbitProperties myRabbitProperties;

    //@Autowired
    //public RabbitConfig(MyRabbitProperties myRabbitProperties) {
    //    this.myRabbitProperties = myRabbitProperties;
    //}

    @Bean
    public TopicExchange myExchange(MyRabbitProperties myRabbitProperties) {
        return new TopicExchange(myRabbitProperties.getExchange());
    }

    @Bean
    public Queue myQueueBean(MyRabbitProperties myRabbitProperties) {
        return new Queue(myRabbitProperties.getQueue(), true);
    }

    @Bean
    public Binding binding(Queue myQueueBean, TopicExchange myExchange, MyRabbitProperties myRabbitProperties) {
        return BindingBuilder.bind(myQueueBean).to(myExchange).with(myRabbitProperties.getRoutingKey());
    }

    @Bean
    public MyRabbitProperties myRabbitProperties() {
        return new MyRabbitProperties();
    }
}
From there, you can access the getter for that field:
@Component
public class RabbitQueueListenerClass {

    @RabbitListener(queues = "#{myRabbitProperties.getQueue()}")
    public void processMessage(Message message) {
    }
}
#RabbitListener(queues = "#{myQueue.name}")
Listener:
#RabbitListener(queues = "${queueName}")
application.properties:
queueName=myQueue
I have an application that publishes a message using Spring AMQP’s RabbitTemplate and subscribes to the message on a POJO using MessageListenerAdapter, pretty much as per the Getting Started - Messaging with RabbitMQ guide.
void handleMessage(final String message) {...}
rabbitTemplate.convertAndSend(EXCHANGE_NAME, QUEUE_NAME, "message");
However, this is sending messages as Strings; surely there is a way to send and receive Object messages directly?
I have tried registering a JsonMessageConverter but to no avail.
Any help would be greatly appreciated - at present I'm manually de/serialising Strings on either side, which seems messy, and I'm surprised this isn't a supported feature.
I have tried registering a JsonMessageConverter but to no avail.
It would be better to see your attempt so we could figure out the issue on our side.
Right now I can only say that you should supply a JsonMessageConverter for both the sending and receiving parts.
I've just tested with the gs-messaging-rabbitmq:
@Autowired
RabbitTemplate rabbitTemplate;

@Autowired
MessageConverter messageConverter;

.....

@Bean
MessageListenerAdapter listenerAdapter(Receiver receiver, MessageConverter messageConverter) {
    MessageListenerAdapter adapter = new MessageListenerAdapter(receiver, "receiveMessage");
    adapter.setMessageConverter(messageConverter);
    return adapter;
}

@Bean
MessageConverter messageConverter() {
    return new Jackson2JsonMessageConverter();
}

.....

System.out.println("Sending message...");
rabbitTemplate.setMessageConverter(messageConverter);
rabbitTemplate.convertAndSend(queueName, new Foo("Hello from RabbitMQ!"));
Where Receiver has been changed to this:
public void receiveMessage(Foo message) {
    System.out.println("Received <" + message + ">");
    latch.countDown();
}
So, the output is:
Waiting five seconds...
Sending message...
Received <Foo{foo='Hello from RabbitMQ!'}>
when we use Foo like this:
@Override
public String toString() {
    return "Foo{" +
            "foo='" + foo + '\'' +
            '}';
}
with appropriate getter and setter.
Thanks @Artem for the hints. My code is pretty much as per the Getting Started guide and I had already tried adding a converter.
But as @Artem pointed out, the trick is to register the converter with the container, the listener adapter, and the rabbit template (which was auto-configured in the example).
So my @Configuration class now looks like this, in addition to whatever is mentioned in the Getting Started guide:
@Bean
SimpleMessageListenerContainer container(final ConnectionFactory connectionFactory, final MessageListenerAdapter messageListenerAdapter,
        final MessageConverter messageConverter)
{
    final SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(QUEUE_NAME);
    container.setMessageListener(messageListenerAdapter);
    container.setMessageConverter(messageConverter);
    return container;
}

@Bean
RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory, final MessageConverter messageConverter)
{
    final RabbitTemplate rabbitTemplate = new RabbitTemplate();
    rabbitTemplate.setConnectionFactory(connectionFactory);
    rabbitTemplate.setMessageConverter(messageConverter);
    return rabbitTemplate;
}

@Bean
MessageConverter messageConverter()
{
    return new Jackson2JsonMessageConverter();
}

@Bean
Receiver receiver()
{
    return new Receiver();
}

@Bean
MessageListenerAdapter listenerAdapter(final Receiver receiver, final MessageConverter messageConverter)
{
    return new MessageListenerAdapter(receiver, messageConverter);
}
which means the Receiver can have a typed method signature such as:
void handleMessage(final CbeEvent message)
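For completeness, a sketch of such a receiver (CbeEvent is the poster's own payload type; the body shown here is illustrative). MessageListenerAdapter's default listener method name is handleMessage, which is why no method name needs to be passed to the constructor above:

public class Receiver
{
    // Called by the MessageListenerAdapter; the Jackson2JsonMessageConverter has
    // already turned the JSON message body into a CbeEvent instance.
    void handleMessage(final CbeEvent message)
    {
        System.out.println("Received <" + message + ">");
    }
}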