How to periodically publish a message to ActiveMQ with Spring Integration DSL (Spring Boot)

I installed ActiveMQ and want to send a message to "my.queue" periodically, say every 10 seconds.
I'm struggling to comprehend the Spring Integration DSL.
I need something like:
IntegrationFlows.from(every 5 seconds)
    .send(message to "my.queue")

Yes, you can do that with the Spring Integration Java DSL and its IntegrationFlow abstraction. To create a periodic task, start the flow with this IntegrationFlows factory method:
/**
 * Provides a {@link Supplier} as a source of messages to the integration flow,
 * which will be triggered by a <b>provided</b>
 * {@link org.springframework.integration.endpoint.SourcePollingChannelAdapter}.
 * @param messageSource the {@link Supplier} to populate.
 * @param endpointConfigurer the {@link Consumer} to provide more options for the
 * {@link org.springframework.integration.config.SourcePollingChannelAdapterFactoryBean}.
 * @param <T> the supplier type.
 * @return new {@link IntegrationFlowBuilder}.
 * @see Supplier
 */
public static <T> IntegrationFlowBuilder fromSupplier(Supplier<T> messageSource,
        Consumer<SourcePollingChannelAdapterSpec> endpointConfigurer) {
The Supplier may return the object you'd like to send downstream as a payload. The second, Consumer argument can be configured with a poller:
.poller(p -> p.fixedDelay(1000))
This way, every second a message is created from the supplied payload and sent downstream.
To send a message to ActiveMQ, use the org.springframework.integration.jms.dsl.Jms factory and its method for the respective channel adapter:
/**
 * The factory to produce a {@link JmsOutboundChannelAdapterSpec}.
 * @param connectionFactory the JMS ConnectionFactory to build on
 * @return the {@link JmsOutboundChannelAdapterSpec} instance
 */
public static JmsOutboundChannelAdapterSpec.JmsOutboundChannelSpecTemplateAware outboundAdapter(
        ConnectionFactory connectionFactory) {
The result of this factory is then passed to the handle() operator:
/**
 * Populate a {@link ServiceActivatingHandler} for the provided
 * {@link MessageHandler} implementation.
 * Can be used as a Java 8 Lambda expression:
 * <pre class="code">
 * {@code
 *  .handle(m -> logger.info(m.getPayload()))
 * }
 * </pre>
 * @param messageHandler the {@link MessageHandler} to use.
 * @return the current {@link BaseIntegrationFlowDefinition}.
 */
public B handle(MessageHandler messageHandler) {
All the info is present in the docs: https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl
Something like this:
@Bean
public IntegrationFlow jmsPeriodicFlow() {
    return IntegrationFlows.fromSupplier(() -> "hello",
                e -> e.poller(p -> p.fixedDelay(5000)))
            .handle(Jms.outboundAdapter(jmsConnectionFactory())
                    .destination("my.queue"))
            .get();
}
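This assumes a jmsConnectionFactory() bean is in scope. With Spring Boot and the ActiveMQ starter, one is auto-configured for you; otherwise, a minimal sketch (the broker URL is an assumption) could look like:
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;

@Bean
public ConnectionFactory jmsConnectionFactory() {
    // Assumption: a local ActiveMQ broker on its default port.
    return new ActiveMQConnectionFactory("tcp://localhost:61616");
}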

Related

Kafka ConsumerInterceptor onCommit not being called when using transactions

I'm using Spring Kafka in a Spring Boot application and attempting to use a Kafka ConsumerInterceptor to intercept when offsets are committed.
This works as long as producer transactions are not enabled, but once transactions are turned on, Interceptor::onCommit is no longer called.
In the following minimal example, everything works as expected:
@SpringBootApplication
@EnableKafka
class Application {
    @KafkaListener(topics = ["test"])
    fun onMessage(message: String) {
        log.warn("onMessage: $message")
    }
}
Interceptor:
class Interceptor : ConsumerInterceptor<String, String> {
    override fun onCommit(offsets: MutableMap<TopicPartition, OffsetAndMetadata>) {
        log.warn("onCommit: $offsets")
    }

    override fun onConsume(records: ConsumerRecords<String, String>): ConsumerRecords<String, String> {
        log.warn("onConsume: $records")
        return records
    }

    // required by the ConsumerInterceptor contract; no-ops here
    override fun close() {}

    override fun configure(configs: MutableMap<String, *>) {}
}
Application config:
spring:
  kafka:
    consumer:
      enable-auto-commit: false
      auto-offset-reset: earliest
      properties:
        "interceptor.classes": com.example.Interceptor
      group-id: test-group
    listener:
      ack-mode: record
Inside a test using @EmbeddedKafka:
@Test
fun sendMessage() {
    kafkaTemplate.send("test", "id", "sent message").get() // block so we don't end before the consumer gets the message
}
This outputs what I would expect:
onConsume: org.apache.kafka.clients.consumer.ConsumerRecords#6a646f3c
onMessage: sent message
onCommit: {test-0=OffsetAndMetadata{offset=1, leaderEpoch=null, metadata=''}}
However, when I enable transactions by providing a transaction-id-prefix, the Interceptor's onCommit is no longer called.
My updated config only adds:
spring:
  kafka:
    producer:
      transaction-id-prefix: tx-id-
And the test is updated to wrap send in a transaction:
@Test
fun sendMessage() {
    kafkaTemplate.executeInTransaction {
        kafkaTemplate.send("test", "a", "sent message").get()
    }
}
With this change, my log output is now only:
onConsume: org.apache.kafka.clients.consumer.ConsumerRecords#738b5968
onMessage: sent message
The Interceptor's onConsume method is called and the @KafkaListener receives the message, but onCommit is never called.
Does anyone happen to know what's happening here? Are my expectations about what I should see incorrect?
Offsets are not committed via the consumer when using transactions (exactly once semantics). Instead, the offset is committed via the producer.
KafkaProducer...
/**
 * Sends a list of specified offsets to the consumer group coordinator, and also marks
 * those offsets as part of the current transaction. These offsets will be considered
 * committed only if the transaction is committed successfully. The committed offset should
 * be the next message your application will consume, i.e. lastProcessedMessageOffset + 1.
 * <p>
 * This method should be used when you need to batch consumed and produced messages
 * together, typically in a consume-transform-produce pattern. Thus, the specified
 * {@code groupMetadata} should be extracted from the used {@link KafkaConsumer consumer} via
 * {@link KafkaConsumer#groupMetadata()} to leverage consumer group metadata. This will provide
 * stronger fencing than just supplying the {@code consumerGroupId} and passing in {@code new ConsumerGroupMetadata(consumerGroupId)},
 * however note that the full set of consumer group metadata returned by {@link KafkaConsumer#groupMetadata()}
 * requires the brokers to be on version 2.5 or newer to understand.
 *
 * <p>
 * Note, that the consumer should have {@code enable.auto.commit=false} and should
 * also not commit offsets manually (via {@link KafkaConsumer#commitSync(Map) sync} or
 * {@link KafkaConsumer#commitAsync(Map, OffsetCommitCallback) async} commits).
 * This method will raise {@link TimeoutException} if the producer cannot send offsets before expiration of {@code max.block.ms}.
 * Additionally, it will raise {@link InterruptException} if interrupted.
 *
 * @throws IllegalStateException if no transactional.id has been configured or no transaction has been started.
 * @throws ProducerFencedException fatal error indicating another producer with the same transactional.id is active
 * @throws org.apache.kafka.common.errors.UnsupportedVersionException fatal error indicating the broker
 *         does not support transactions (i.e. if its version is lower than 0.11.0.0) or
 *         the broker doesn't support latest version of transactional API with all consumer group metadata
 *         (i.e. if its version is lower than 2.5.0).
 * @throws org.apache.kafka.common.errors.UnsupportedForMessageFormatException fatal error indicating the message
 *         format used for the offsets topic on the broker does not support transactions
 * @throws org.apache.kafka.common.errors.AuthorizationException fatal error indicating that the configured
 *         transactional.id is not authorized, or the consumer group id is not authorized.
 * @throws org.apache.kafka.clients.consumer.CommitFailedException if the commit failed and cannot be retried
 *         (e.g. if the consumer has been kicked out of the group). Users should handle this by aborting the transaction.
 * @throws org.apache.kafka.common.errors.FencedInstanceIdException if this producer instance gets fenced by broker due to a
 *         mis-configured consumer instance id within group metadata.
 * @throws org.apache.kafka.common.errors.InvalidProducerEpochException if the producer has attempted to produce with an old epoch
 *         to the partition leader. See the exception for more details
 * @throws KafkaException if the producer has encountered a previous fatal or abortable error, or for any
 *         other unexpected error
 * @throws TimeoutException if the time taken for sending offsets has surpassed max.block.ms.
 * @throws InterruptException if the thread is interrupted while blocked
 */
public void sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata> offsets,
        ConsumerGroupMetadata groupMetadata) throws ProducerFencedException {
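To make this concrete, here is a minimal sketch of the consume-transform-produce pattern (not from the original post; the output topic "out" and the processBatch wiring are assumptions). The offsets ride along with the producer's transaction, so the consumer never performs a commit and a ConsumerInterceptor's onCommit has nothing to observe:
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

void processBatch(KafkaConsumer<String, String> consumer,
        KafkaProducer<String, String> producer,
        ConsumerRecords<String, String> records) {
    producer.beginTransaction();
    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    for (ConsumerRecord<String, String> record : records) {
        producer.send(new ProducerRecord<>("out", record.key(), record.value()));
        // the committed offset must be the next offset to consume
        offsets.put(new TopicPartition(record.topic(), record.partition()),
                new OffsetAndMetadata(record.offset() + 1));
    }
    // Offsets are committed as part of the producer's transaction;
    // no consumer-side commit (and no onCommit callback) ever happens.
    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
    producer.commitTransaction();
}
Spring Kafka's listener container takes this same producer-side path when transactions are configured, which is why adding the transaction-id-prefix changes the commit behavior.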

How to set the maximum number of connection retries with Spring AMQP

I have a scenario where my RabbitMQ instance is not always available, and I would like to set the maximum number of times a connection retry happens. Is this possible with Spring AMQP?
Example,
@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory factory = new CachingConnectionFactory();
    factory.setUri("amqp://...");
    // try the URI connection 4 times max, then fail if still no connection
    return factory;
}
Message producers will only try to create a connection when you send a message.
Message consumers (listener containers) will retry indefinitely.
You can add a ConnectionListener to the connection factory and stop() the listener containers after some number of failures; see the sketch after the interface below.
@FunctionalInterface
public interface ConnectionListener {

    /**
     * Called when a new connection is established.
     * @param connection the connection.
     */
    void onCreate(Connection connection);

    /**
     * Called when a connection is closed.
     * @param connection the connection.
     * @see #onShutDown(ShutdownSignalException)
     */
    default void onClose(Connection connection) {
    }

    /**
     * Called when a connection is force closed.
     * @param signal the shut down signal.
     * @since 2.0
     */
    default void onShutDown(ShutdownSignalException signal) {
    }

    /**
     * Called when a connection couldn't be established.
     * @param exception the exception thrown.
     * @since 2.2.17
     */
    default void onFailed(Exception exception) {
    }

}
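A minimal sketch of that approach, assuming Spring AMQP 2.2.17+ (for onFailed) and @RabbitListener-based containers; the class name and the limit of 4 attempts are illustrative:
import java.util.concurrent.atomic.AtomicInteger;
import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.connection.ConnectionListener;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;

public class StopAfterRetriesListener implements ConnectionListener {

    private final AtomicInteger failures = new AtomicInteger();

    private final RabbitListenerEndpointRegistry registry;

    private final int maxRetries;

    public StopAfterRetriesListener(RabbitListenerEndpointRegistry registry, int maxRetries) {
        this.registry = registry;
        this.maxRetries = maxRetries;
    }

    @Override
    public void onCreate(Connection connection) {
        this.failures.set(0); // reset the counter once a connection succeeds
    }

    @Override
    public void onFailed(Exception exception) {
        if (this.failures.incrementAndGet() >= this.maxRetries) {
            this.registry.stop(); // stop all listener containers, ending the retry loop
        }
    }

}
Register it with connectionFactory.addConnectionListener(new StopAfterRetriesListener(registry, 4)); the RabbitListenerEndpointRegistry is a bean you can autowire when @EnableRabbit or the Boot auto-configuration is active.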

How to disable logging all messages in a Kafka batch in case of an exception?

When using @KafkaListener with batches, the error handler logs the content of the full batch (all messages) in case of an exception.
How can I make this less verbose? I'd like to avoid spamming the log files with all the messages and only see the actual exception.
Here is a minimal example of what my consumer currently looks like:
@Component
class TestConsumer {

    @Bean
    fun kafkaBatchListenerContainerFactory(kafkaProperties: KafkaProperties): ConcurrentKafkaListenerContainerFactory<String, String> {
        val configs = kafkaProperties.buildConsumerProperties()
        configs[ConsumerConfig.MAX_POLL_RECORDS_CONFIG] = 10000
        val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
        factory.consumerFactory = DefaultKafkaConsumerFactory(configs)
        factory.isBatchListener = true
        return factory
    }

    @KafkaListener(
        topics = ["myTopic"],
        containerFactory = "kafkaBatchListenerContainerFactory"
    )
    fun batchListen(values: List<ConsumerRecord<String, String>>) {
        // Something that might throw an exception in rare cases.
    }
}
What version are you using?
This container property was added in 2.2.14.
/**
 * Set to false to log {@code record.toString()} in log messages instead
 * of {@code topic-partition@offset}.
 * @param onlyLogRecordMetadata false to log the entire record.
 * @since 2.2.14
 */
public void setOnlyLogRecordMetadata(boolean onlyLogRecordMetadata) {
    this.onlyLogRecordMetadata = onlyLogRecordMetadata;
}
It has been true by default since version 2.7 (which is why the javadocs now read that way).
This was the previous javadoc:
/**
 * Set to true to only log {@code topic-partition@offset} in log messages instead
 * of {@code record.toString()}.
 * @param onlyLogRecordMetadata true to only log the topic/partition/offset.
 * @since 2.2.14
 */
Also, starting with version 2.5, you can set the log level on the error handler:
/**
 * Set the level at which the exception thrown by this handler is logged.
 * @param logLevel the level (default ERROR).
 */
public void setLogLevel(KafkaException.Level logLevel) {
    Assert.notNull(logLevel, "'logLevel' cannot be null");
    this.logLevel = logLevel;
}
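A minimal sketch of wiring both options into the batch factory from the question, assuming a pre-2.8 Spring Kafka version; the SeekToCurrentBatchErrorHandler choice is an assumption, not from the original post:
import org.springframework.kafka.KafkaException;
import org.springframework.kafka.listener.SeekToCurrentBatchErrorHandler;

// inside kafkaBatchListenerContainerFactory(...), after creating the factory:
// log only topic-partition@offset instead of each record's toString()
// (already the default since 2.7)
factory.getContainerProperties().setOnlyLogRecordMetadata(true);
// lower the level at which the error handler logs the failed batch
SeekToCurrentBatchErrorHandler errorHandler = new SeekToCurrentBatchErrorHandler();
errorHandler.setLogLevel(KafkaException.Level.DEBUG);
factory.setBatchErrorHandler(errorHandler);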

Multiple Kafka producer instances for each HTTP request

I have a REST endpoint which can be invoked by multiple users at the same time. This endpoint invokes a transactional Kafka producer.
As I understand it, I can't use the same Kafka producer instance concurrently when transactions are in use.
How can I efficiently create a new Kafka producer instance for each HTTP request?
// Kafka transactions enabled
producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
producerProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "prod-1-");

@Service
public class ProducerService {

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    public void postMessage(final MyUser message) {
        // wrapping the send method in a transaction
        this.kafkaTemplate.executeInTransaction(t -> t.send("custom", null, message));
    }
}
See the javadocs for the DefaultKafkaProducerFactory. It maintains a cache of producers for producer-initiated transactions.
/**
 * The {@link ProducerFactory} implementation for a {@code singleton} shared {@link Producer} instance.
 * <p>
 * This implementation will return the same {@link Producer} instance (if transactions are
 * not enabled) for the provided {@link Map} {@code configs} and optional {@link Serializer}
 * implementations on each {@link #createProducer()} invocation.
 * ...
 * Setting {@link #setTransactionIdPrefix(String)} enables transactions; in which case, a
 * cache of producers is maintained; closing a producer returns it to the cache. The
 * producers are closed and the cache is cleared when the factory is destroyed, the
 * application context stopped, or the {@link #reset()} method is called.
 * ...
 */
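With that in place, a single KafkaTemplate is enough for concurrent HTTP requests: each executeInTransaction call checks a producer out of the factory's cache and returns it when the transaction completes. A minimal sketch, assuming the producerProps map from the question:
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Bean
public ProducerFactory<Object, Object> producerFactory() {
    DefaultKafkaProducerFactory<Object, Object> factory =
            new DefaultKafkaProducerFactory<>(producerProps);
    // enables transactions and the producer cache; the factory derives a
    // unique transactional.id from this prefix for each cached producer
    factory.setTransactionIdPrefix("prod-1-");
    return factory;
}

@Bean
public KafkaTemplate<Object, Object> kafkaTemplate(ProducerFactory<Object, Object> producerFactory) {
    return new KafkaTemplate<>(producerFactory);
}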

How to aggregate messages with identical headers in Spring Integration

public static void main(String[] args) throws InterruptedException {
    AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
    ctx.register(Main.class);
    ctx.refresh();
    DirectChannel channel1 = ctx.getBean("channel1", DirectChannel.class);
    ctx.getBean("channel2", PublishSubscribeChannel.class).subscribe(message ->
            System.out.println("Output: " + message));
    channel1.send(MessageBuilder.withPayload("p1")
            .setHeader(CORRELATION_ID, 1)
            .setHeader(SEQUENCE_SIZE, 2)
            .setHeader(SEQUENCE_NUMBER, 1)
            .setHeader("a", 1)
            .build());
    channel1.send(MessageBuilder.withPayload("p2")
            .setHeader(CORRELATION_ID, 1)
            .setHeader(SEQUENCE_SIZE, 2)
            .setHeader(SEQUENCE_NUMBER, 2)
            .setHeader("a", 2)
            .build());
}
@Bean
public MessageChannel channel1() {
    return MessageChannels.direct().get();
}

@Bean
public MessageChannel channel2() {
    return MessageChannels.publishSubscribe().get();
}

@Bean
public IntegrationFlow flow1() {
    return IntegrationFlows
            .from("channel1")
            .aggregate(a -> a
                    .releaseStrategy(new SequenceSizeReleaseStrategy())
                    .expireGroupsUponCompletion(true)
                    .sendPartialResultOnExpiry(true))
            .channel("channel2")
            .get();
}
Output: GenericMessage [payload=[p1, p2], headers={sequenceNumber=2, a=2, correlationId=1, id=b5e51041-c967-1bb4-1601-7e468ae28527, sequenceSize=2, timestamp=1580475773518}]
Headers "a" and "sequenceNumber" were overwritten.
How can I aggregate the messages so that the values of these headers are combined instead?
The desired output would be:
Output: GenericMessage [payload=[p1, p2], headers={sequenceNumber=[1, 2], a=[1, 2], correlationId=1, id=b5e51041-c967-1bb4-1601-7e468ae28527, sequenceSize=2, timestamp=1580475773518}]
See AbstractAggregatingMessageGroupProcessor:
/**
 * Specify a {@link Function} to map a {@link MessageGroup} into composed headers for the output message.
 * @param headersFunction the {@link Function} to use.
 * @since 5.2
 */
public void setHeadersFunction(Function<MessageGroup, Map<String, Object>> headersFunction) {
and also:
/**
 * The {@link Function} implementation for a default headers merging in the aggregator
 * component. It takes all the unique headers from all the messages in group and removes
 * those which are conflicted: have different values from different messages.
 *
 * @author Artem Bilan
 *
 * @since 5.2
 *
 * @see AbstractAggregatingMessageGroupProcessor
 */
public class DefaultAggregateHeadersFunction implements Function<MessageGroup, Map<String, Object>> {
Or just the long-existing:
/**
 * This default implementation simply returns all headers that have no conflicts among the group. An absent header
 * on one or more Messages within the group is not considered a conflict. Subclasses may override this method with
 * more advanced conflict-resolution strategies if necessary.
 * @param group The message group.
 * @return The aggregated headers.
 */
protected Map<String, Object> aggregateHeaders(MessageGroup group) {
So, what you need in your aggregate() configuration is an outputProcessor(MessageGroupProcessor outputProcessor) option.
See docs for more info: https://docs.spring.io/spring-integration/docs/5.2.3.RELEASE/reference/html/message-routing.html#aggregatingmessagehandler
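For the desired output above, you would plug a custom headers function into the aggregator's output processor. A minimal sketch (the list-collecting function is illustrative, not a built-in):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import org.springframework.integration.aggregator.DefaultAggregatingMessageGroupProcessor;
import org.springframework.messaging.MessageHeaders;

@Bean
public IntegrationFlow flow1() {
    DefaultAggregatingMessageGroupProcessor processor = new DefaultAggregatingMessageGroupProcessor();
    processor.setHeadersFunction(group -> {
        // collect every value seen for each header, skipping the framework-managed id/timestamp
        Map<String, List<Object>> collected = new HashMap<>();
        group.getMessages().forEach(message ->
                message.getHeaders().forEach((name, value) -> {
                    if (!MessageHeaders.ID.equals(name) && !MessageHeaders.TIMESTAMP.equals(name)) {
                        collected.computeIfAbsent(name, key -> new ArrayList<>()).add(value);
                    }
                }));
        // keep a single value where all messages agree, a list where they conflict
        Map<String, Object> headers = new HashMap<>();
        collected.forEach((name, values) -> {
            List<Object> distinct = values.stream().distinct().collect(Collectors.toList());
            headers.put(name, distinct.size() == 1 ? distinct.get(0) : values);
        });
        return headers;
    });
    return IntegrationFlows
            .from("channel1")
            .aggregate(a -> a
                    .outputProcessor(processor)
                    .releaseStrategy(new SequenceSizeReleaseStrategy())
                    .expireGroupsUponCompletion(true)
                    .sendPartialResultOnExpiry(true))
            .channel("channel2")
            .get();
}
This yields a=[1, 2] and sequenceNumber=[1, 2], while correlationId and sequenceSize, which agree across the group, stay scalar.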
