How to see the types that flow in Spring Integration's IntegrationFlow

I'm trying to understand what type is returned when I aggregate in Spring Integration, and that's pretty hard. I'm using Project Reactor, and my code snippet is:
public FluxAggregatorMessageHandler randomIdsBatchAggregator() {
    FluxAggregatorMessageHandler f = new FluxAggregatorMessageHandler();
    f.setWindowTimespan(Duration.ofSeconds(5));
    f.setCombineFunction(messageFlux -> messageFlux
            .map(Message::getPayload)
            .collectList()
            .map(GenericMessage::new));
    return f;
}
@Bean
public IntegrationFlow dataPipeline() {
    return IntegrationFlows.from(somePublisher)
            // ----> Is the type passed here Message<?> or Flux<Message<?>>?
            .handle(randomIdsBatchAggregator())
            // ----> What type is returned from the aggregation?
            .handle(bla())
            .get();
}
Beyond understanding the types passed in this example, I want to know in general how I can tell which objects flow through an IntegrationFlow and what their types are.

IntegrationFlows.from(somePublisher)
This creates a FluxMessageChannel internally, which subscribes to the provided Publisher. Every single event is emitted from this channel to its subscriber - your aggregator.
The FluxAggregatorMessageHandler produces whatever is explained in the setCombineFunction() JavaDocs:
/**
 * Configure a transformation {@link Function} to apply for a {@link Flux} window to emit.
 * Requires a {@link Mono} result with a {@link Message} as value as a combination result
 * of the incoming {@link Flux} for window.
 * By default a {@link Flux} for window is fully wrapped into a message with headers copied
 * from the first message in window. Such a {@link Flux} in the payload has to be subscribed
 * and consumed downstream.
 * @param combineFunction the {@link Function} to use for result windows transformation.
 */
public void setCombineFunction(Function<Flux<Message<?>>, Mono<Message<?>>> combineFunction) {
So, it is a Mono with a message, which is exactly what your .collectList() produces. That Mono is subscribed by the framework when it emits a reply message from the FluxAggregatorMessageHandler. Therefore your .handle(bla()) must expect a list of payloads, which is really natural for an aggregator result.
See more in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#flux-aggregator
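For example, here is a minimal sketch of what the downstream handler could look like, assuming the combine function above wraps the collected List into a GenericMessage (the handler body and logging are illustrative, not part of the original answer):
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.messaging.MessageHandler;

@Bean
public MessageHandler bla() {
    return message -> {
        // The payload here is the List produced by collectList() in the combine function.
        List<?> batch = (List<?>) message.getPayload();
        System.out.println("Received a window of " + batch.size() + " payloads");
    };
}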

Related

@KafkaListener per specific header value

I have a @KafkaListener:
@KafkaListener(topicPattern = "SameTopic")
public void onMessage(Message<String> message, Acknowledgment acknowledgment) {
    String eventType = new String((byte[]) message.getHeaders().get("Event-Type"), StandardCharsets.UTF_8);
    switch (eventType) {
        case "create" -> doCreate(message);
        case "update" -> doUpdate(message);
        case "delete" -> doDelete(message);
    }
}
The producer sets a custom header Event-Type with three possible values: create, update, delete. Currently I'm reading this header value from the Message and then invoking the rest of the logic according to the header value.
Is there any way to create three @KafkaListeners where each of them consumes messages filtered by some criteria - in my case, by the Event-Type header value?
@KafkaListener(topicPattern = "SameTopic", ...)
public void onCreate(Message<String> message, Acknowledgment acknowledgment) {
    doCreate(message);
}

@KafkaListener(topicPattern = "SameTopic", ...)
public void onUpdate(Message<String> message, Acknowledgment acknowledgment) {
    doUpdate(message);
}

@KafkaListener(topicPattern = "SameTopic", ...)
public void onDelete(Message<String> message, Acknowledgment acknowledgment) {
    doDelete(message);
}
I'm aware of RecordFilterStrategy, but couldn't get any help from it.
Consider having those types mapped to partitions on the topic.
That way you definitely can have different @KafkaListeners with a specific partition assigned:
/**
 * The topicPartitions for this listener when using manual topic/partition
 * assignment.
 * <p>
 * Mutually exclusive with {@link #topicPattern()} and {@link #topics()}.
 * @return the topic names or expressions (SpEL) to listen to.
 */
TopicPartition[] topicPartitions() default {};
The doc is here: https://docs.spring.io/spring-kafka/docs/current/reference/html/#manual-assignment
It's probably not going to work well with several instances of your app, since with manual assignment there is no consumer group involved. You may consider refining the logic into 3 different topics. Or, if that is not possible on the producer side, use Kafka Streams to split() the original topic into other topics according to the record key.
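A minimal sketch of that manual assignment, assuming the producer can be changed to route create/update/delete records to partitions 0, 1 and 2 of SameTopic (that routing convention is an assumption; Kafka will not do it for you):
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.TopicPartition;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.messaging.Message;

@KafkaListener(topicPartitions = @TopicPartition(topic = "SameTopic", partitions = "0"))
public void onCreate(Message<String> message, Acknowledgment acknowledgment) {
    doCreate(message); // partition 0 carries only 'create' events by convention
    acknowledgment.acknowledge();
}

@KafkaListener(topicPartitions = @TopicPartition(topic = "SameTopic", partitions = "1"))
public void onUpdate(Message<String> message, Acknowledgment acknowledgment) {
    doUpdate(message); // partition 1 carries only 'update' events by convention
    acknowledgment.acknowledge();
}

@KafkaListener(topicPartitions = @TopicPartition(topic = "SameTopic", partitions = "2"))
public void onDelete(Message<String> message, Acknowledgment acknowledgment) {
    doDelete(message); // partition 2 carries only 'delete' events by convention
    acknowledgment.acknowledge();
}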

Which AmqpEvent or AmqpException to handle when an exclusive consumer fails

I have two instances of the same application, running in different virtual machines. I want to grant exclusive access to a queue for the consumer of one of them, while invalidating the local cache that is used by the consumer on the other.
Now, I have figured out that I need to handle ListenerContainerConsumerFailedEvent, but I am guessing that implementing an ApplicationListener for this event alone is not going to ensure that the event was raised because of an exclusive consumer exception. I might want to check the Throwable of the event, or perform even further checks.
Which subclass of AmqpException or what further checks should I perform to ensure that the exception is received due to exclusive consumer access?
The logic in the listener container implementations is like this:
if (e.getCause() instanceof ShutdownSignalException
        && e.getCause().getMessage().contains("in exclusive use")) {
    getExclusiveConsumerExceptionLogger().log(logger,
            "Exclusive consumer failure", e.getCause());
    publishConsumerFailedEvent("Consumer raised exception, attempting restart", false, e);
}
So, we indeed raise a ListenerContainerConsumerFailedEvent, and you can trace the cause message like we do in the framework, but on the other hand you can just inject your own ConditionalExceptionLogger:
/**
 * Set a {@link ConditionalExceptionLogger} for logging exclusive consumer failures. The
 * default is to log such failures at WARN level.
 * @param exclusiveConsumerExceptionLogger the conditional exception logger.
 * @since 1.5
 */
public void setExclusiveConsumerExceptionLogger(ConditionalExceptionLogger exclusiveConsumerExceptionLogger) {
and catch such an exclusive situation over there.
You can also consider using RabbitUtils.isExclusiveUseChannelClose(cause) in your code:
/**
 * Return true if the {@link ShutdownSignalException} reason is AMQP.Channel.Close
 * and the operation that failed was basicConsumer and the failure text contains
 * "exclusive".
 * @param sig the exception.
 * @return true if the declaration failed because of an exclusive queue.
 */
public static boolean isExclusiveUseChannelClose(ShutdownSignalException sig) {
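Putting that together, here is a hedged sketch of an ApplicationListener that reacts only to the exclusive-consumer case (the cache-invalidation hook is a placeholder for your own logic, not part of the framework):
import org.springframework.amqp.rabbit.connection.RabbitUtils;
import org.springframework.amqp.rabbit.listener.ListenerContainerConsumerFailedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import com.rabbitmq.client.ShutdownSignalException;

@Component
public class ExclusiveConsumerFailureListener
        implements ApplicationListener<ListenerContainerConsumerFailedEvent> {

    @Override
    public void onApplicationEvent(ListenerContainerConsumerFailedEvent event) {
        // Walk the cause chain, since the container wraps the broker exception.
        Throwable cause = event.getThrowable();
        while (cause != null && !(cause instanceof ShutdownSignalException)) {
            cause = cause.getCause();
        }
        if (cause != null && RabbitUtils.isExclusiveUseChannelClose((ShutdownSignalException) cause)) {
            invalidateLocalCache(); // placeholder for your cache logic
        }
    }

    private void invalidateLocalCache() {
        // ...
    }
}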

Can a single Spring KafkaConsumer listener listen to multiple topics?

Does anyone know if a single listener can listen to multiple topics, like below? I know just "topic1" works; what if I want to add additional topics? Can you please show examples for both options below? Thanks for the help!
@KafkaListener(topics = "topic1,topic2")
public void listen(ConsumerRecord<?, ?> record, Acknowledgment ack) {
    System.out.println(record);
}
or
ContainerProperties containerProps = new ContainerProperties(new TopicPartitionInitialOffset("topic1, topic2", 0));
Yes, just follow the @KafkaListener JavaDocs:
/**
 * The topics for this listener.
 * The entries can be 'topic name', 'property-placeholder keys' or 'expressions'.
 * Expression must be resolved to the topic name.
 * Mutually exclusive with {@link #topicPattern()} and {@link #topicPartitions()}.
 * @return the topic names or expressions (SpEL) to listen to.
 */
String[] topics() default {};

/**
 * The topic pattern for this listener.
 * The entries can be 'topic name', 'property-placeholder keys' or 'expressions'.
 * Expression must be resolved to the topic pattern.
 * Mutually exclusive with {@link #topics()} and {@link #topicPartitions()}.
 * @return the topic pattern or expression (SpEL).
 */
String topicPattern() default "";

/**
 * The topicPartitions for this listener.
 * Mutually exclusive with {@link #topicPattern()} and {@link #topics()}.
 * @return the topic names or expressions (SpEL) to listen to.
 */
TopicPartition[] topicPartitions() default {};
So, your use-case should look like:
@KafkaListener(topics = {"topic1", "topic2"})
If you have to fetch multiple topics from the application.properties file:
@KafkaListener(topics = { "${spring.kafka.topic1}", "${spring.kafka.topic2}" })
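Put together, a minimal sketch of a listener consuming from both topics (topic names and the printed output are illustrative):
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;

@KafkaListener(topics = {"topic1", "topic2"})
public void listen(ConsumerRecord<?, ?> record) {
    // Invoked for records from either topic; record.topic() tells you which one.
    System.out.println(record.topic() + ": " + record.value());
}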

JmsMessageDrivenChannelAdapter start phase finishing observation [duplicate]

I have an integration test for my Spring Integration config, which consumes messages from a JMS topic with durable subscription. For testing, I am using ActiveMQ instead of Tibco EMS.
The issue I have is that I have to delay sending the first message to the endpoint using a sleep call at the beginning of our test method. Otherwise the message is dropped.
If I remove the setting for durable subscription and selector, then the first message can be sent right away without delay.
I'd like to get rid of the sleep, which is unreliable. Is there a way to check if the endpoint is completely set up before I send the message?
Below is the configuration.
Thanks for your help!
<int-jms:message-driven-channel-adapter
id="myConsumer" connection-factory="myCachedConnectionFactory"
destination="myTopic" channel="myChannel" error-channel="errorChannel"
pub-sub-domain="true" subscription-durable="true"
durable-subscription-name="testDurable"
selector="..."
transaction-manager="emsTransactionManager" auto-startup="false"/>
If you are using a clean embedded ActiveMQ instance for the test, the durability of the subscription is irrelevant until the subscription is established. So you have no choice but to wait until that happens.
You could avoid the sleep by sending a series of startup messages and only start the real test when the last one is received.
EDIT
I forgot that there is a method isRegisteredWithDestination() on the DefaultMessageListenerContainer.
Javadocs...
/**
 * Return whether at least one consumer has entered a fixed registration with the
 * target destination. This is particularly interesting for the pub-sub case where
 * it might be important to have an actual consumer registered that is guaranteed
 * not to miss any messages that are just about to be published.
 * <p>This method may be polled after a {@link #start()} call, until asynchronous
 * registration of consumers has happened which is when the method will start returning
 * {@code true} - provided that the listener container ever actually establishes
 * a fixed registration. It will then keep returning {@code true} until shutdown,
 * since the container will hold on to at least one consumer registration thereafter.
 * <p>Note that a listener container is not bound to having a fixed registration in
 * the first place. It may also keep recreating consumers for every invoker execution.
 * This particularly depends on the {@link #setCacheLevel cache level} setting:
 * only {@link #CACHE_CONSUMER} will lead to a fixed registration.
 */
We use it in some channel tests, where we get the container using reflection and then poll the method until we are subscribed to the topic.
/**
 * Blocks until the listener container has subscribed; if the container does not support
 * this test, or the caching mode is incompatible, true is returned. Otherwise blocks
 * until timeout milliseconds have passed, or the consumer has registered.
 * @see DefaultMessageListenerContainer#isRegisteredWithDestination()
 * @param timeout Timeout in milliseconds.
 * @return True if a subscriber has connected or the container/attributes does not support
 * the test. False if a valid container does not have a registered consumer within
 * timeout milliseconds.
 */
private static boolean waitUntilRegisteredWithDestination(SubscribableJmsChannel channel, long timeout) {
    AbstractMessageListenerContainer container =
            (AbstractMessageListenerContainer) new DirectFieldAccessor(channel).getPropertyValue("container");
    if (container instanceof DefaultMessageListenerContainer) {
        DefaultMessageListenerContainer listenerContainer =
                (DefaultMessageListenerContainer) container;
        if (listenerContainer.getCacheLevel() != DefaultMessageListenerContainer.CACHE_CONSUMER) {
            return true;
        }
        while (timeout > 0) {
            if (listenerContainer.isRegisteredWithDestination()) {
                return true;
            }
            try {
                Thread.sleep(100);
            }
            catch (InterruptedException e) { }
            timeout -= 100;
        }
        return false;
    }
    return true;
}
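For the message-driven channel adapter in the question, a hedged sketch of the same idea could look like this, assuming the container is reachable by reflection under a listenerContainer field of JmsMessageDrivenEndpoint (the field name, bean lookup, and timeout are assumptions for illustration):
import org.springframework.beans.DirectFieldAccessor;
import org.springframework.integration.jms.JmsMessageDrivenEndpoint;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

// In the test: start the adapter (auto-startup="false"), poll until the durable
// subscription is registered, then send the first message.
JmsMessageDrivenEndpoint endpoint = context.getBean("myConsumer", JmsMessageDrivenEndpoint.class);
endpoint.start();
DefaultMessageListenerContainer container = (DefaultMessageListenerContainer)
        new DirectFieldAccessor(endpoint).getPropertyValue("listenerContainer");
long timeout = 10_000;
while (timeout > 0 && !container.isRegisteredWithDestination()) {
    Thread.sleep(100); // the test method declares 'throws InterruptedException'
    timeout -= 100;
}
// now it is safe to send the first message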

What is the difference between Rx.Observable subscribe and forEach

After creating an Observable like so
var source = Rx.Observable.create(function(observer) {...});
What is the difference between subscribe
source.subscribe(function(x) {});
and forEach
source.forEach(function(x) {});
In the ES7 spec, which RxJS 5.0 follows (but RxJS 4.0 does not), the two are NOT the same.
subscribe
public subscribe(observerOrNext: Observer | Function, error: Function, complete: Function): Subscription
Observable.subscribe is where you will do most of your true Observable handling. It returns a subscription token, which you can use to cancel your subscription. This is important when you do not know the duration of the events/sequence you have subscribed to, or if you may need to stop listening before a known duration.
forEach
public forEach(next: Function, PromiseCtor?: PromiseConstructor): Promise
Observable.forEach returns a promise that will either resolve or reject when the Observable completes or errors. It is intended to clarify situations where you are processing an observable sequence of bounded/finite duration in a more 'synchronous' manner, such as collating all the incoming values and then presenting once, by handling the promise.
Effectively, you can act on each value, as well as error and completion events either way. So the most significant functional difference is the inability to cancel a promise.
I just reviewed the latest code available; technically, forEach actually calls subscribe in RxScala, RxJS, and RxJava, so there doesn't seem to be a big difference. They now have a return type that gives the user a way to stop a subscription or similar.
When I worked on an earlier version of RxJava, subscribe returned a Subscription and forEach was just void, which is why you may see some different answers due to the changes.
/**
 * Subscribes to the [[Observable]] and receives notifications for each element.
 *
 * Alias to `subscribe(T => Unit)`.
 *
 * $noDefaultScheduler
 *
 * @param onNext function to execute for each item.
 * @throws java.lang.IllegalArgumentException if `onNext` is null
 * @throws rx.exceptions.OnErrorNotImplementedException if the [[Observable]] tries to call `onError`
 * @since 0.19
 * @see ReactiveX operators documentation: Subscribe
 */
def foreach(onNext: T => Unit): Unit = {
    asJavaObservable.subscribe(onNext)
}

def subscribe(onNext: T => Unit): Subscription = {
    asJavaObservable.subscribe(scalaFunction1ProducingUnitToAction1(onNext))
}
/**
 * Subscribes an o to the observable sequence.
 * @param {Mixed} [oOrOnNext] The object that is to receive notifications or an action to invoke for each element in the observable sequence.
 * @param {Function} [onError] Action to invoke upon exceptional termination of the observable sequence.
 * @param {Function} [onCompleted] Action to invoke upon graceful termination of the observable sequence.
 * @returns {Disposable} A disposable handling the subscriptions and unsubscriptions.
 */
observableProto.subscribe = observableProto.forEach = function (oOrOnNext, onError, onCompleted) {
    return this._subscribe(typeof oOrOnNext === 'object' ?
        oOrOnNext :
        observerCreate(oOrOnNext, onError, onCompleted));
};
/**
 * Subscribes to the {@link Observable} and receives notifications for each element.
 * <p>
 * Alias to {@link #subscribe(Action1)}
 * <dl>
 * <dt><b>Scheduler:</b></dt>
 * <dd>{@code forEach} does not operate by default on a particular {@link Scheduler}.</dd>
 * </dl>
 *
 * @param onNext
 *            {@link Action1} to execute for each item.
 * @throws IllegalArgumentException
 *             if {@code onNext} is null
 * @throws OnErrorNotImplementedException
 *             if the Observable calls {@code onError}
 * @see ReactiveX operators documentation: Subscribe
 */
public final void forEach(final Action1<? super T> onNext) {
    subscribe(onNext);
}

public final Disposable forEach(Consumer<? super T> onNext) {
    return subscribe(onNext);
}
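A quick hedged check of that last point (assuming RxJava 2.x, where the final snippet above comes from): both calls return a Disposable, so either handle can cancel the stream.
import io.reactivex.Observable;
import io.reactivex.disposables.Disposable;

public class ForEachVsSubscribe {
    public static void main(String[] args) {
        Observable<Integer> source = Observable.just(1, 2, 3);
        // In RxJava 2, forEach(Consumer) is a thin alias for subscribe(Consumer)...
        Disposable viaForEach = source.forEach(System.out::println);
        Disposable viaSubscribe = source.subscribe(System.out::println);
        // ...so either returned handle can stop the subscription.
        viaForEach.dispose();
        viaSubscribe.dispose();
    }
}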
