public static void main(String[] args) throws InterruptedException {
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
ctx.register(Main.class);
ctx.refresh();
DirectChannel channel1 = ctx.getBean("channel1", DirectChannel.class);
ctx.getBean("channel2", PublishSubscribeChannel.class).subscribe(message ->
System.out.println("Output: " + message));
channel1.send(MessageBuilder.withPayload("p1")
.setHeader(CORRELATION_ID, 1)
.setHeader(SEQUENCE_SIZE, 2)
.setHeader(SEQUENCE_NUMBER, 1)
.setHeader("a", 1)
.build());
channel1.send(MessageBuilder.withPayload("p2")
.setHeader(CORRELATION_ID, 1)
.setHeader(SEQUENCE_SIZE, 2)
.setHeader(SEQUENCE_NUMBER, 2)
.setHeader("a", 2)
.build());
}
@Bean
public MessageChannel channel1() {
return MessageChannels.direct().get();
}
@Bean
public MessageChannel channel2() {
return MessageChannels.publishSubscribe().get();
}
@Bean
public IntegrationFlow flow1() {
return IntegrationFlows
.from("channel1")
.aggregate(a -> a
.releaseStrategy(new SequenceSizeReleaseStrategy())
.expireGroupsUponCompletion(true)
.sendPartialResultOnExpiry(true))
.channel("channel2")
.get();
}
Output: GenericMessage [payload=[p1, p2], headers={sequenceNumber=2, a=2, correlationId=1, id=b5e51041-c967-1bb4-1601-7e468ae28527, sequenceSize=2, timestamp=1580475773518}]
Headers "a" and "sequenceNumber" were overwritten.
How to aggregate messages with the identical headers?
It must be so
Output: GenericMessage [payload=[p1, p2], headers={sequenceNumber=[1,2], a=[1, 2], correlationId=1, id=b5e51041-c967-1bb4-1601-7e468ae28527, sequenceSize=2, timestamp=1580475773518}]
See AbstractAggregatingMessageGroupProcessor:
/**
* Specify a {@link Function} to map {@link MessageGroup} into composed headers for output message.
* @param headersFunction the {@link Function} to use.
* @since 5.2
*/
public void setHeadersFunction(Function<MessageGroup, Map<String, Object>> headersFunction) {
and also:
/**
* The {@link Function} implementation for a default headers merging in the aggregator
* component. It takes all the unique headers from all the messages in group and removes
* those which are conflicted: have different values from different messages.
*
* @author Artem Bilan
*
* @since 5.2
*
* @see AbstractAggregatingMessageGroupProcessor
*/
public class DefaultAggregateHeadersFunction implements Function<MessageGroup, Map<String, Object>> {
Or use the long-existing:
/**
* This default implementation simply returns all headers that have no conflicts among the group. An absent header
* on one or more Messages within the group is not considered a conflict. Subclasses may override this method with
* more advanced conflict-resolution strategies if necessary.
* @param group The message group.
* @return The aggregated headers.
*/
protected Map<String, Object> aggregateHeaders(MessageGroup group) {
So, what you need in your aggregate() configuration is an outputProcessor(MessageGroupProcessor outputProcessor) option.
See docs for more info: https://docs.spring.io/spring-integration/docs/5.2.3.RELEASE/reference/html/message-routing.html#aggregatingmessagehandler
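For example, a minimal sketch, assuming you want conflicting header values collected into lists. The headers function below is a hypothetical implementation (not something Spring provides out of the box); it skips the read-only id and timestamp headers:
@Bean
public IntegrationFlow flow1() {
    DefaultAggregatingMessageGroupProcessor processor = new DefaultAggregatingMessageGroupProcessor();
    processor.setHeadersFunction(group -> {
        // Collect every distinct value per header name across the group.
        Map<String, Set<Object>> collected = new LinkedHashMap<>();
        group.getMessages().forEach(message ->
                message.getHeaders().forEach((name, value) -> {
                    if (!MessageHeaders.ID.equals(name) && !MessageHeaders.TIMESTAMP.equals(name)) {
                        collected.computeIfAbsent(name, key -> new LinkedHashSet<>()).add(value);
                    }
                }));
        // Keep single values as-is; turn conflicting ones into lists.
        Map<String, Object> headers = new HashMap<>();
        collected.forEach((name, values) ->
                headers.put(name, values.size() == 1 ? values.iterator().next() : new ArrayList<>(values)));
        return headers;
    });
    return IntegrationFlows
            .from("channel1")
            .aggregate(a -> a
                    .outputProcessor(processor)
                    .releaseStrategy(new SequenceSizeReleaseStrategy())
                    .expireGroupsUponCompletion(true)
                    .sendPartialResultOnExpiry(true))
            .channel("channel2")
            .get();
}
With this, "sequenceNumber" and "a" come out as [1, 2] while correlationId and sequenceSize stay scalar, matching the expected output above.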
Can someone please help me understand why a message offset that is manually and immediately committed is re-processed by the KafkaListener when an exception occurs?
So I'm expecting the following behaviour:
I receive an event in Kafka Listener
I commit the offset
An exception occurs
I'm expecting that message not to be reprocessed because the offset was committed.
Is my understanding correct? Or does Spring roll back the manual acknowledgment when an exception occurs?
I have the following Listener code:
@KafkaListener(topics = {"${acknowledgement.topic}"}, containerFactory = "concurrentKafkaListenerContainerFactory")
public void onMessage(String message, Acknowledgment acknowledgment) throws InterruptedException {
acknowledgment.acknowledge();
throw new Exception1();
}
And the concurrentKafkaListenerContainerFactory code is:
@Bean
public ConsumerFactory<String, String> consumerFactory() {
kafkaProperties.getConsumer().setEnableAutoCommit(false);
return new DefaultKafkaConsumerFactory<>(kafkaProperties.buildConsumerProperties());
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> concurrentKafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> concurrentKafkaListenerContainerFactory = new ConcurrentKafkaListenerContainerFactory<>();
concurrentKafkaListenerContainerFactory.setConsumerFactory(consumerFactory());
concurrentKafkaListenerContainerFactory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
return concurrentKafkaListenerContainerFactory;
}
Yes; the default error handler treats any exception as retryable, regardless of whether its offset has been committed.
You should either not throw an exception, or tell the DefaultErrorHandler which exception(s) should not be retried.
/**
* Add exception types to the default list. By default, the following exceptions will
* not be retried:
* <ul>
* <li>{#link DeserializationException}</li>
* <li>{#link MessageConversionException}</li>
* <li>{#link ConversionException}</li>
* <li>{#link MethodArgumentResolutionException}</li>
* <li>{#link NoSuchMethodException}</li>
* <li>{#link ClassCastException}</li>
* </ul>
* All others will be retried, unless {#link #defaultFalse()} has been called.
* #param exceptionTypes the exception types.
* #see #removeClassification(Class)
* #see #setClassifications(Map, boolean)
*/
public final void addNotRetryableExceptions(Class<? extends Exception>...
exceptionTypes) {
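For example, a minimal sketch of that wiring, assuming Spring for Apache Kafka 2.8+ (where DefaultErrorHandler is the default error handler) and that Exception1 is your custom exception type:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> concurrentKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    // Classify Exception1 as fatal so the failed record is not redelivered.
    DefaultErrorHandler errorHandler = new DefaultErrorHandler();
    errorHandler.addNotRetryableExceptions(Exception1.class);
    factory.setCommonErrorHandler(errorHandler);
    return factory;
}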
Whenever I try to read content of a MimeMessage I get the "java.lang.IllegalStateException: Folder is not Open" exception.
I have a very simple service handling the received message:
@Service
public class ReceiveMailService {
private final Logger log = LoggerFactory.getLogger(ReceiveMailService.class);
public void handleReceivedMail(MimeMessage receivedMessage) {
try {
log.debug("{}", receivedMessage.getContent());
MimeMessageParser mimeMessageParser = new MimeMessageParser(receivedMessage).parse(); // it breaks here
doMyStuff(mimeMessageParser);
} catch (Exception e) {
log.error(e.getMessage(), e);
}
}
}
Here's my configuration class:
@Configuration
@EnableIntegration
public class MailReceiverConfiguration {
private static final Logger log = LoggerFactory.getLogger(MailReceiverConfiguration.class);
#Value("${spring.mail.pop3.host}")
private String host;
#Value("${spring.mail.pop3.port}")
private Integer port;
#Value("${spring.mail.username}")
private String username;
#Value("${spring.mail.password}")
private String password;
private final ReceiveMailService receiveMailService;
public MailReceiverConfiguration(ReceiveMailService receiveMailService) {
this.receiveMailService = receiveMailService;
}
@Bean
public IntegrationFlow mailListener() {
return IntegrationFlows
.from(Mail
.pop3InboundAdapter(host, port, username, password)
.javaMailProperties(p -> {
p.put("mail.debug", "false");
p.put("mail.pop3.socketFactory.fallback", "false");
p.put("mail.pop3.port", port);
p.put("mail.pop3.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
p.put("mail.pop3.socketFactory.port", port);
})
.maxFetchSize(10)
.shouldDeleteMessages(false),
e -> e.poller(Pollers.fixedRate(5000).maxMessagesPerPoll(10))
)
.handle(message -> receiveMailService.handleReceivedMail((MimeMessage) message.getPayload()))
.get();
}
}
I have run out of ideas as to why this doesn't work.
See this option for use-cases like yours:
/**
* When configured to {@code false}, the folder is not closed automatically after a fetch.
* It is the target application's responsibility to close it using the
* {@link org.springframework.integration.IntegrationMessageHeaderAccessor#CLOSEABLE_RESOURCE} header
* from the message produced by this channel adapter.
* @param autoCloseFolder set to {@code false} to keep folder opened.
* @return the spec.
* @since 5.2
* @see AbstractMailReceiver#setAutoCloseFolder(boolean)
*/
public S autoCloseFolder(boolean autoCloseFolder) {
The docs are here: https://docs.spring.io/spring-integration/docs/current/reference/html/mail.html#mail-inbound
Starting with version 5.2, the autoCloseFolder option is provided on the mail receiver. Setting it to false doesn't close the folder automatically after a fetch; instead, an IntegrationMessageHeaderAccessor.CLOSEABLE_RESOURCE header (see the MessageHeaderAccessor API for more information) is populated into every message produced from the channel adapter.
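A minimal sketch of how your flow could look with that option; using StaticMessageHeaderAccessor.getCloseableResource(message) is one way to read the CLOSEABLE_RESOURCE header and close the folder after the content has been consumed:
@Bean
public IntegrationFlow mailListener() {
    return IntegrationFlows
            .from(Mail.pop3InboundAdapter(host, port, username, password)
                            .javaMailProperties(p -> p.put("mail.pop3.socketFactory.class",
                                    "javax.net.ssl.SSLSocketFactory"))
                            .maxFetchSize(10)
                            .shouldDeleteMessages(false)
                            .autoCloseFolder(false), // keep the folder open while reading the content
                    e -> e.poller(Pollers.fixedRate(5000).maxMessagesPerPoll(10)))
            .handle(message -> {
                try {
                    receiveMailService.handleReceivedMail((MimeMessage) message.getPayload());
                }
                finally {
                    // With autoCloseFolder(false) it is our job to close the folder.
                    Closeable closeableResource = StaticMessageHeaderAccessor.getCloseableResource(message);
                    if (closeableResource != null) {
                        try {
                            closeableResource.close();
                        }
                        catch (IOException ex) {
                            log.warn("Failed to close the mail folder", ex);
                        }
                    }
                }
            })
            .get();
}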
I installed ActiveMQ and want to send a message to "my.queue" periodically, say every 10 seconds.
I'm struggling to comprehend the Spring Integration DSL language.
I need something like
IntegrationFlows.from(every 5 seconds)
.send(message to "my.queue")
Yes, you can do that with the Spring Integration Java DSL and its IntegrationFlow abstraction. To make a periodic task, you need to use this factory method on IntegrationFlows to start the flow:
/**
* Provides {@link Supplier} as source of messages to the integration flow,
* which will be triggered by a <b>provided</b>
* {@link org.springframework.integration.endpoint.SourcePollingChannelAdapter}.
* @param messageSource the {@link Supplier} to populate.
* @param endpointConfigurer the {@link Consumer} to provide more options for the
* {@link org.springframework.integration.config.SourcePollingChannelAdapterFactoryBean}.
* @param <T> the supplier type.
* @return new {@link IntegrationFlowBuilder}.
* @see Supplier
*/
public static <T> IntegrationFlowBuilder fromSupplier(Supplier<T> messageSource,
Consumer<SourcePollingChannelAdapterSpec> endpointConfigurer) {
The Supplier may return an object you'd like to send as a payload downstream. The second argument (a Consumer) can be configured with:
.poller(p -> p.fixedDelay(1000))
This way, a message is created from the supplied payload every second and sent downstream.
To send a message to ActiveMQ, you need to use the org.springframework.integration.jms.dsl.Jms factory and its method for the respective channel adapter:
/**
* The factory to produce a {@link JmsOutboundChannelAdapterSpec}.
* @param connectionFactory the JMS ConnectionFactory to build on
* @return the {@link JmsOutboundChannelAdapterSpec} instance
*/
public static JmsOutboundChannelAdapterSpec.JmsOutboundChannelSpecTemplateAware outboundAdapter(
ConnectionFactory connectionFactory) {
The result of this factory has to be used in the DSL callback like:
/**
* Populate a {@link ServiceActivatingHandler} for the provided
* {@link MessageHandler} implementation.
* Can be used as Java 8 Lambda expression:
* <pre class="code">
* {@code
* .handle(m -> logger.info(m.getPayload())
* }
* </pre>
* @param messageHandler the {@link MessageHandler} to use.
* @return the current {@link BaseIntegrationFlowDefinition}.
*/
public B handle(MessageHandler messageHandler) {
All the info is present in the docs: https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl
Something like this:
@Bean
public IntegrationFlow jmsPeriodicFlow() {
return IntegrationFlows.fromSupplier(() -> "hello",
e -> e.poller(p -> p.fixedDelay(5000)))
.handle(Jms.outboundAdapter(jmsConnectionFactory())
.destination("my.queue"))
.get();
}
When using #KafkaListener with batches, the error handler logs the content of the full batch (all messages) in case of an exception.
How can I make this less verbose? I'd like to avoid spamming the log files with all the messages and only see the actual exception.
Here is a minimal example of what my consumer currently looks like:
@Component
class TestConsumer {
@Bean
fun kafkaBatchListenerContainerFactory(kafkaProperties: KafkaProperties): ConcurrentKafkaListenerContainerFactory<String, String> {
val configs = kafkaProperties.buildConsumerProperties()
configs[ConsumerConfig.MAX_POLL_RECORDS_CONFIG] = 10000
val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
factory.consumerFactory = DefaultKafkaConsumerFactory(configs)
factory.isBatchListener = true
return factory
}
@KafkaListener(
topics = ["myTopic"],
containerFactory = "kafkaBatchListenerContainerFactory"
)
fun batchListen(values: List<ConsumerRecord<String, String>>) {
// Something that might throw an exception in rare cases.
}
}
What version are you using?
This container property was added in 2.2.14.
/**
* Set to false to log {@code record.toString()} in log messages instead
* of {@code topic-partition@offset}.
* @param onlyLogRecordMetadata false to log the entire record.
* @since 2.2.14
*/
public void setOnlyLogRecordMetadata(boolean onlyLogRecordMetadata) {
this.onlyLogRecordMetadata = onlyLogRecordMetadata;
}
It has been true by default since version 2.7 (which is why the javadocs now read that way).
This was the previous javadoc:
/**
* Set to true to only log {@code topic-partition@offset} in log messages instead
* of {@code record.toString()}.
* @param onlyLogRecordMetadata true to only log the topic/partition/offset.
* @since 2.2.14
*/
Also, starting with version 2.5, you can set the log level on the error handler:
/**
* Set the level at which the exception thrown by this handler is logged.
* @param logLevel the level (default ERROR).
*/
public void setLogLevel(KafkaException.Level logLevel) {
Assert.notNull(logLevel, "'logLevel' cannot be null");
this.logLevel = logLevel;
}
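Putting both together on the factory from the question, a sketch (shown in Java to match the quoted APIs; assumes a pre-2.8 Spring for Apache Kafka, where RecoveringBatchErrorHandler is a common choice for batch listeners):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaBatchListenerContainerFactory(
        KafkaProperties kafkaProperties) {
    Map<String, Object> configs = kafkaProperties.buildConsumerProperties();
    configs.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10000);
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(configs));
    factory.setBatchListener(true);
    // 2.2.14+ (and the default since 2.7): log topic-partition@offset, not record.toString().
    factory.getContainerProperties().setOnlyLogRecordMetadata(true);
    // 2.5+: lower the level at which the error handler logs the exception.
    RecoveringBatchErrorHandler errorHandler = new RecoveringBatchErrorHandler();
    errorHandler.setLogLevel(KafkaException.Level.WARN);
    factory.setBatchErrorHandler(errorHandler);
    return factory;
}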
In a test, I'm trying to verify that a message passed through a specific channel, or alternatively to get hold of the message from that channel.
So my flow is: controller -> gateway -> ServiceActivator
private final Gateway gateway;
public ResponseEntity<Map<String,String>> submit(String applicationId, ApplicationDto applicationDto) {
applicationDto.setApplicationId(applicationId);
gateway.submitApplication(applicationDto);
return ResponseEntity.ok(Map.of(MESSAGE, "Accepted submit"));
}
the gateway
@Gateway(requestChannel = "submitApplicationChannel", replyChannel = "replySubmitApplicationChannel")
WorkflowPayload submitApplication(ApplicationDto applicationDto);
pipeline
@Bean
MessageChannel submitApplicationChannel() {
return new DirectChannel();
}
So my test is sending a request to start the flow
@Test
@DisplayName("Application Submission")
void submissionTest() throws Exception {
mockMvc.perform(MockMvcRequestBuilders
.post("/api/v1/applications/contract-validation/" + APPLICATION_ID)
.contentType(MediaType.APPLICATION_JSON)
.content(objectMapper.writeValueAsString(payload)))
.andExpect(status().isAccepted())
.andReturn();
//Check HERE if the message passed through the channel
}
Can you give me a hand?
In your test, add a ChannelInterceptor to the submitApplicationChannel before calling the gateway.
public interface ChannelInterceptor {
/**
* Invoked before the Message is actually sent to the channel.
* This allows for modification of the Message if necessary.
* If this method returns {@code null} then the actual
* send invocation will not occur.
*/
@Nullable
default Message<?> preSend(Message<?> message, MessageChannel channel) {
return message;
}
/**
* Invoked immediately after the send invocation. The boolean
* value argument represents the return value of that invocation.
*/
default void postSend(Message<?> message, MessageChannel channel, boolean sent) {
}
/**
* Invoked after the completion of a send regardless of any exception that
* have been raised thus allowing for proper resource cleanup.
* <p>Note that this will be invoked only if {#link #preSend} successfully
* completed and returned a Message, i.e. it did not return {#code null}.
* #since 4.1
*/
default void afterSendCompletion(
Message<?> message, MessageChannel channel, boolean sent, #Nullable Exception ex) {
}
/**
* Invoked as soon as receive is called and before a Message is
* actually retrieved. If the return value is 'false', then no
* Message will be retrieved. This only applies to PollableChannels.
*/
default boolean preReceive(MessageChannel channel) {
return true;
}
/**
* Invoked immediately after a Message has been retrieved but before
* it is returned to the caller. The Message may be modified if
* necessary; {@code null} aborts further interceptor invocations.
* This only applies to PollableChannels.
*/
@Nullable
default Message<?> postReceive(Message<?> message, MessageChannel channel) {
return message;
}
/**
* Invoked after the completion of a receive regardless of any exception that
* have been raised thus allowing for proper resource cleanup.
* <p>Note that this will be invoked only if {#link #preReceive} successfully
* completed and returned {#code true}.
* #since 4.1
*/
default void afterReceiveCompletion(@Nullable Message<?> message, MessageChannel channel,
@Nullable Exception ex) {
}
}
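For example, a minimal test sketch: the AtomicReference and CountDownLatch are just one way to capture what passed through, and addInterceptor(...) is available because DirectChannel implements InterceptableChannel:
@Autowired
private DirectChannel submitApplicationChannel;

@Test
@DisplayName("Application Submission")
void submissionTest() throws Exception {
    AtomicReference<Message<?>> captured = new AtomicReference<>();
    CountDownLatch passedThrough = new CountDownLatch(1);
    submitApplicationChannel.addInterceptor(new ChannelInterceptor() {

        @Override
        public Message<?> preSend(Message<?> message, MessageChannel channel) {
            captured.set(message);
            passedThrough.countDown();
            return message;
        }

    });
    mockMvc.perform(MockMvcRequestBuilders
                    .post("/api/v1/applications/contract-validation/" + APPLICATION_ID)
                    .contentType(MediaType.APPLICATION_JSON)
                    .content(objectMapper.writeValueAsString(payload)))
            .andExpect(status().isAccepted());
    // Check HERE that the message passed through the channel.
    assertThat(passedThrough.await(10, TimeUnit.SECONDS)).isTrue();
    assertThat(captured.get().getPayload()).isInstanceOf(ApplicationDto.class);
}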