Correlate messages between 2 JMS queues using Spring Integration components - jms

I have 2 JMS queues and my application subscribes to both of them with the Jms.messageDrivenChannelAdapter(...) component.
The first queue receives messages of type Paid. The second queue receives messages of type Reversal.
The business scenario defines a correlation between messages of type Paid and messages of type Reversal:
a Reversal should wait for its Paid counterpart in order to be processed.
How can I achieve such "wait" pattern with Spring Integration?
Is it possible to correlate messages between 2 JMS queues?

See the documentation about the Aggregator.
The aggregator correlates messages using some correlation strategy and releases the group based on some release strategy.
The Aggregator combines a group of related messages, by correlating and storing them, until the group is deemed to be complete. At that point, the aggregator creates a single message by processing the whole group and sends the aggregated message as output.
The output payload is a list of the grouped message payloads by default, but you can provide a custom output processor.
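For example, a custom output processor can be provided on the aggregator spec; here is a minimal sketch (the expressions and the payload-list processing are illustrative, not prescriptive):
.aggregate(a -> a
        .correlationExpression("headers.jms_correlationId")
        .releaseExpression("size() == 2")
        // illustrative: emit only the list of payloads from the completed group
        .outputProcessor(group -> group.getMessages()
                .stream()
                .map(Message::getPayload)
                .collect(Collectors.toList())))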
EDIT
@SpringBootApplication
public class So55299268Application {

    public static void main(String[] args) {
        SpringApplication.run(So55299268Application.class, args);
    }

    @Bean
    public IntegrationFlow in1(ConnectionFactory connectionFactory) {
        return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory)
                        .destination("queue1"))
                .channel("aggregator.input")
                .get();
    }

    @Bean
    public IntegrationFlow in2(ConnectionFactory connectionFactory) {
        return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(connectionFactory)
                        .destination("queue2"))
                .channel("aggregator.input")
                .get();
    }

    @Bean
    public IntegrationFlow aggregator() {
        return f -> f
                .aggregate(a -> a
                        .correlationExpression("headers.jms_correlationId")
                        .releaseExpression("size() == 2")
                        .expireGroupsUponCompletion(true)
                        .expireGroupsUponTimeout(true)
                        .groupTimeout(5_000L)
                        .discardChannel("discards.input"))
                .handle(System.out::println);
    }

    @Bean
    public IntegrationFlow discards() {
        return f -> f.handle((p, h) -> {
            System.out.println("Aggregation timed out for " + p);
            return null;
        });
    }

    @Bean
    public ApplicationRunner runner(JmsTemplate template) {
        return args -> {
            send(template, "one", "two");
            send(template, "three", null);
        };
    }

    private void send(JmsTemplate template, String one, String two) {
        template.convertAndSend("queue1", one, m -> {
            m.setJMSCorrelationID(one);
            return m;
        });
        if (two != null) {
            template.convertAndSend("queue2", two, m -> {
                m.setJMSCorrelationID(one);
                return m;
            });
        }
    }

}
and the output:
GenericMessage [payload=[two, one], headers={jms_redelivered=false, jms_destination=queue://queue1, jms_correlationId=one, id=784535fe-8861-1b22-2cfa-cc2e67763674, priority=4, jms_timestamp=1553290921442, jms_messageId=ID:Gollum2.local-55540-1553290921241-4:1:3:1:1, timestamp=1553290921457}]
2019-03-22 17:42:06.460 INFO 55396 --- [ask-scheduler-1] o.s.i.a.AggregatingMessageHandler : Expiring MessageGroup with correlationKey[three]
Aggregation timed out for three

Related

Producer callback in Spring Cloud Stream with reactor core publisher

I have written a Spring Cloud Stream application where producers publish messages to designated Kafka topics. My question is: how can I add a producer callback to receive an ack/confirmation that the message has been successfully published to the topic, like we do in Spring Kafka with producer.send(record, new Callback() { ... }) (maintaining an async producer)? Below is my code:
private final Sinks.Many<Message<?>> responseProcessor = Sinks.many().multicast().onBackpressureBuffer();

@Bean
public Supplier<Flux<Message<?>>> event() {
    return responseProcessor::asFlux;
}

public Message<?> publishEvent(String status) {
    try {
        String key = ...;
        Message<?> response = MessageBuilder.withPayload(payload)
                .setHeader(KafkaHeaders.MESSAGE_KEY, key)
                .build();
        responseProcessor.tryEmitNext(response);
    }
How can I make sure that tryEmitNext has successfully written to the topic?
Is implementing ProducerListener a possible solution? I couldn't find a concrete solution/documentation in Spring Cloud Stream.
UPDATE
I have implemented the below now; it seems to work as expected.
@Component
public class MyProducerListener<K, V> implements ProducerListener<K, V> {

    @Override
    public void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        // Do nothing on success
    }

    @Override
    public void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata, Exception exception) {
        log.error("Producer exception occurred while publishing message : {}, exception : {}", producerRecord, exception);
    }
}

@Bean
ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> customizer(MyProducerListener pl) {
    return (handler, destinationName) -> handler.getKafkaTemplate().setProducerListener(pl);
}
See the Kafka Producer Properties.
recordMetadataChannel
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
Failed sends go to the producer error channel (if configured); see Error Channels. Default: null.
You can add a @ServiceActivator to consume from this channel asynchronously.
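For example, a minimal sketch (the binding name event-out-0 and the channel bean name sendResults are assumptions, not from the question):
@Bean
public MessageChannel sendResults() {
    return new DirectChannel();
}

// assumes: spring.cloud.stream.kafka.bindings.event-out-0.producer.recordMetadataChannel=sendResults
@ServiceActivator(inputChannel = "sendResults")
public void handleSendResult(Message<?> sent) {
    RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    System.out.println("Published to partition " + meta.partition() + " at offset " + meta.offset());
}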

Use Function to replyTo RPC request

I would like to use the java.util.function.Function approach to reply to a request sent via RabbitTemplate.convertSendAndReceive. It works fine with the @RabbitListener, but I cannot get it working with the functional approach.
Client (working)
class Client(private val template: RabbitTemplate) {

    fun send() = template.convertSendAndReceive(
        "rpc-exchange",
        "rpc-routing-key",
        "payload message"
    )
}
Server (approach 1, working)
class Server {

    @RabbitListener(queues = ["rpc-queue"])
    fun receiveRequest(message: String) = "Response Message"

    @Bean
    fun queue(): Queue {
        return Queue("rpc-queue")
    }

    @Bean
    fun exchange(): DirectExchange {
        return DirectExchange("rpc-exchange")
    }

    @Bean
    fun binding(exchange: DirectExchange, queue: Queue): Binding {
        return BindingBuilder.bind(queue).to(exchange).with("rpc-routing-key")
    }
}
Server (approach 2, not working) --> goal
class Server {

    @Bean
    fun receiveRequest(): Function<String, String> {
        return Function { value: String ->
            "Response Message"
        }
    }
}
With the config (approach 2)
spring.cloud.function.definition: receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination: rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group: rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey: rpc-routing-key
With approach 2 the server receives the request. Unfortunately, the response is lost. Does anybody know how to use the RPC pattern with the functional approach? I don't want to use the @RabbitListener.
See documentation/tutorial.
Spring Cloud Stream is not really designed for RPC on the server side, so it won't handle this automatically like @RabbitListener does.
You can, however, achieve it by adding an output binding to route the reply to the default exchange and the replyTo header:
spring.cloud.function.definition=receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination=rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group=rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey=rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression=headers['amqp_replyTo']
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }

}
PAYLOAD MESSAGE
Note that the reply comes as a byte[]; you can use a custom message converter on the template to convert it to a String.
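A minimal sketch of such a converter (assuming the reply body is UTF-8 text):
// override fromMessage so replies come back as String instead of byte[]
template.setMessageConverter(new SimpleMessageConverter() {

    @Override
    public Object fromMessage(org.springframework.amqp.core.Message message) {
        return new String(message.getBody(), StandardCharsets.UTF_8);
    }

});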
EDIT
In reply to the third comment below.
The RabbitTemplate uses direct reply-to by default, so the reply address is not a real queue; it is a pseudo-queue provided by the broker and associated with a consumer in the template.
You can also configure the template to use temporary reply queues, but those are also routed to via the default exchange ("").
You can, however, configure an external reply container, with the template as the listener.
You can then route back using whatever exchange and routing key you want.
Putting it all together:
spring.cloud.function.definition=receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination=rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group=rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey=rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=reply-exchange
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression='reply-routing-key'
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.declare-exchange=false
spring.rabbitmq.template.reply-timeout=10000
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    SimpleMessageListenerContainer replyContainer(SimpleRabbitListenerContainerFactory factory,
            RabbitTemplate template) {

        template.setReplyAddress("reply-queue");
        SimpleMessageListenerContainer container = factory.createListenerContainer();
        container.setQueueNames("reply-queue");
        container.setMessageListener(template);
        return container;
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template, SimpleMessageListenerContainer replyContainer) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }

}
IMPORTANT: if you have multiple instances of the client side, each needs its own reply queue.
In that case, the routing key must be the queue name, and you should revert to the previous example's routing key expression (to get the queue name from the amqp_replyTo header).

How to send keyed message to Kafka using Spring Cloud Stream Supplier

I want to use Spring Cloud Stream to produce keyed (message with specific key) messages to Kafka.
@SpringBootApplication
public class SpringCloudStreamKafkaApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringCloudStreamKafkaApplication.class, args);
    }

    @Bean
    Supplier<DataRecord> process() {
        return () -> new DataRecord(42L);
    }
}
What do I need to change in the Supplier code to provide the key?
Is it possible with the new style of API (using lambdas)?
Thank you
Return a Message<?> and set the KafkaHeaders.MESSAGE_KEY header:
@Bean
Supplier<Message<String>> process() {
    return () -> MessageBuilder.withPayload("foo")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "bar".getBytes())
            .build();
}
(This assumes the default key serializer, byte[].)
EDIT
This will be called endlessly.
If you want to send a finite stream, I believe you have to switch to the reactive model.
@Bean
Supplier<Flux<Message<String>>> processFinite() {
    Message<String> msg1 = MessageBuilder.withPayload("foo")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "bar".getBytes())
            .build();
    Message<String> msg2 = MessageBuilder.withPayload("baz")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "qux".getBytes())
            .build();
    return () -> {
        return Flux.just(msg1, msg2);
    };
}
There is also Flux.fromStream(myStream), which will end at the end of the stream.
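A minimal sketch of that variant (payloads and keys are the same illustrative values as above):
@Bean
Supplier<Flux<Message<String>>> processFromStream() {
    // the Flux completes when the underlying stream is exhausted
    return () -> Flux.fromStream(Stream.of(
            MessageBuilder.withPayload("foo")
                    .setHeader(KafkaHeaders.MESSAGE_KEY, "bar".getBytes())
                    .build(),
            MessageBuilder.withPayload("baz")
                    .setHeader(KafkaHeaders.MESSAGE_KEY, "qux".getBytes())
                    .build()));
}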
EDIT2
You can also use the StreamBridge.
https://docs.spring.io/spring-cloud-stream/docs/3.1.4/reference/html/spring-cloud-stream.html#_sending_arbitrary_data_to_an_output_e_g_foreign_event_driven_sources
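For example, a minimal sketch using StreamBridge (the binding name process-out-0 is an assumption):
// StreamBridge lets you send on demand instead of polling a Supplier
@Autowired
private StreamBridge streamBridge;

public void sendKeyed(String key, String payload) {
    streamBridge.send("process-out-0", MessageBuilder.withPayload(payload)
            .setHeader(KafkaHeaders.MESSAGE_KEY, key.getBytes())
            .build());
}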

How to dead-letter RabbitMQ messages when an exception happens in a service after an aggregator's forceRelease

I am trying to figure out the best way to handle errors that might have occurred in a service that is called after an aggregator's group timeout has occurred, mimicking the same flow as if the releaseExpression had been met.
Here is my setup:
I have an AmqpInboundChannelAdapter that takes in messages and sends them to my aggregator.
When the releaseExpression has been met and before the groupTimeout has expired, if an exception gets thrown in my ServiceActivator, all the messages in that MessageGroup get sent to my dead letter queue (10 messages in my example below, which is only used for illustrative purposes). This is what I would expect.
If my releaseExpression hasn't been met but the groupTimeout has been met and the group times out, and an exception gets thrown in my ServiceActivator, then the messages do not get sent to my dead letter queue and are acked.
After reading another blog post (link1), I learned that this happens because the processing happens on another thread, via the MessageGroupStoreReaper, and not on the thread the SimpleMessageListenerContainer was on. Once processing moves away from the listener's thread, the messages are auto-acked.
I added the configuration mentioned in the link above and see the error messages getting sent to my error handler. My main question is: what is the best way to handle this scenario so as to minimize messages getting lost?
Here are the options I was exploring:
Use a BatchingRabbitTemplate in my custom error handler to publish the failed messages to the same dead letter queue they would have gone to if the releaseExpression had been met. (This is the approach I outlined below, but I am worried about messages getting lost if an error happens during publishing.)
Investigate whether there is a way I could let the SimpleMessageListenerContainer know about the error that occurred and have it send the batch of failed messages to a dead letter queue. (I doubt this is possible, since it seems the messages are already acked.)
Don't set the SimpleMessageListenerContainer to AcknowledgeMode.AUTO and manually ack the messages when they get processed by the service, whether the releaseExpression is met or the groupTimeout happens. (This seems kind of messy, since there can be 1..N messages in the MessageGroup, but I wanted to see what others have done.)
Ideally, I want a flow that mimics the flow when the releaseExpression has been met, so that the messages don't get lost.
Does anyone have a recommendation on the best way to handle this scenario, based on what they have used in the past?
Thanks for any help and/or advice!
Here is my current configuration using the Spring Integration DSL.
@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    container.setTransactionManager(transactionManager);
    container.setChannelTransacted(true);
    container.setTxSize(10);
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    return container;
}
@Bean
public AmqpInboundChannelAdapter inboundRabbitMessages() {
    AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(workListenerContainer());
    return adapter;
}
I have defined an error channel and my own taskScheduler to use for the MessageGroupStoreReaper.
@Bean
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler ts = new ThreadPoolTaskScheduler();
    MessagePublishingErrorHandler mpe = new MessagePublishingErrorHandler();
    mpe.setDefaultErrorChannel(myErrorChannel());
    ts.setErrorHandler(mpe);
    return ts;
}

@Bean
public PollableChannel myErrorChannel() {
    return new QueueChannel();
}
@Bean
public IntegrationFlow aggregationFlow() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
            })
            .handle("someService", "processMessages")
            .get();
}
Here is my custom error flow
@Bean
public IntegrationFlow errorResponse() {
    return IntegrationFlows.from("myErrorChannel")
            .<MessagingException, Message<?>>transform(MessagingException::getFailedMessage,
                    e -> e.poller(p -> p.fixedDelay(100)))
            .channel("myErrorChannelHandler")
            .handle("myErrorHandler", "handleFailedMessage")
            .log()
            .get();
}
Here is the custom error handler
@Component
public class MyErrorHandler {

    @Autowired
    BatchingRabbitTemplate batchingRabbitTemplate;

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(Message<?> message) {
        ArrayList<SomeObject> payload = (ArrayList<SomeObject>) message.getPayload();
        payload.forEach(m -> batchingRabbitTemplate.convertAndSend("some.dlq", "#", m));
    }
}
Here is the BatchingRabbitTemplate bean
@Bean
public BatchingRabbitTemplate batchingRabbitTemplate() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(5);
    scheduler.initialize();
    BatchingStrategy batchingStrategy = new SimpleBatchingStrategy(10, Integer.MAX_VALUE, 30000);
    BatchingRabbitTemplate batchingRabbitTemplate = new BatchingRabbitTemplate(batchingStrategy, scheduler);
    batchingRabbitTemplate.setConnectionFactory(rabbitConnectionFactory);
    return batchingRabbitTemplate;
}
Update 1, to show the custom MessageGroupProcessor:
public class CustomAggregatingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {

    @Override
    protected final Object aggregatePayloads(MessageGroup group, Map<String, Object> headers) {
        return group;
    }
}
Example service:
@Slf4j
public class SomeService {

    @ServiceActivator
    public void processMessages(MessageGroup messageGroup) throws IOException {
        Collection<Message<?>> messages = messageGroup.getMessages();
        // Do business logic
        // ack messages in the group
        for (Message<?> m : messages) {
            com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                    m.getHeaders().get("amqp_channel");
            long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
            log.debug("deliveryTag = {}", deliveryTag);
            log.debug("Channel = {}", channel);
            channel.basicAck(deliveryTag, false);
        }
    }
}
Updated integration flow:
@Bean
public IntegrationFlow aggregationFlowWithCustomMessageProcessor() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
                a.outputProcessor(new CustomAggregatingMessageGroupProcessor());
            })
            .handle("someService", "processMessages")
            .get();
}
New error handler to do the nack:
@Slf4j
public class MyErrorHandler {

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(MessageGroup messageGroup) throws IOException {
        if (messageGroup != null) {
            log.debug("Nack messages size = {}", messageGroup.getMessages().size());
            Collection<Message<?>> messages = messageGroup.getMessages();
            for (Message<?> m : messages) {
                com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                        m.getHeaders().get("amqp_channel");
                long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
                log.debug("deliveryTag = {}", deliveryTag);
                log.debug("channel = {}", channel);
                channel.basicNack(deliveryTag, false, false);
            }
        }
    }
}
Update 2: added a custom ReleaseStrategy and changed the aggregator.
public class CustomMeasureGroupReleaseStrategy implements ReleaseStrategy {

    private static final int MAX_MESSAGE_COUNT = 10;

    @Override
    public boolean canRelease(MessageGroup messageGroup) {
        return messageGroup.getMessages().size() >= MAX_MESSAGE_COUNT;
    }
}

@Bean
public IntegrationFlow aggregationFlowWithCustomMessageProcessorAndReleaseStrategy() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.transactional(true);
                a.releaseStrategy(new CustomMeasureGroupReleaseStrategy());
                a.outputProcessor(new CustomAggregatingMessageGroupProcessor());
            })
            .handle("someService", "processMessages")
            .get();
}
There are some flaws in your understanding. If you use AUTO, only the last message will be dead-lettered when an exception occurs; messages successfully deposited in the group before the release will be ack'd immediately.
The only way to achieve what you want is to use MANUAL acks.
There is no way to "tell the listener container to send messages to the DLQ". The container never sends messages to the DLQ; it rejects a message, and the broker sends it to the DLX/DLQ.
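For example, a minimal sketch of the container change (everything else in workListenerContainer() above stays the same):
// with MANUAL acks nothing is acked until the service (or the error
// handler) calls basicAck/basicNack, as in the updates above
container.setAcknowledgeMode(AcknowledgeMode.MANUAL);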

Right way to split, enrich items then send each item to another channel?

Is this the right way to split a list of items, enrich each item, and then send each of those enriched items to another channel?
It seems like even though each item is being enriched, only the last one is sent to the output channel...
Here is the snippet from my test where I see the flow being invoked for only page2.
this.sitePackage = new Package();
this.sitePackage.add(page1);
this.sitePackage.add(page2);
this.sitePackage.add(page3);
//Publish using gateway
this.publishingService.publish(sitePackage);
If I do this however...
this.sitePackage.add(page1);
this.sitePackage.add(page1);
this.sitePackage.add(page2);
this.sitePackage.add(page2);
this.sitePackage.add(page3);
this.sitePackage.add(page3);
I see all the pages being published, but the last one is page2, not page3 (even though from debugging I can see the instance has page3's properties).
It seems like every other item is being seen by the flows...
My flows go like this...
Starting with the PublishPackage flow: this is the main entry flow, intended to split the items out of the package and send each of them, after enriching the payload, to the flows attached to the publishPackageItem channel...
@Bean
IntegrationFlow flowPublishPackage()
{
    return flow -> flow
            .channel(this.publishPackageChannel())
            .<Package>handle((p, h) -> this.savePackage(p))
            .split(Package.class, this::splitPackage)
            .channel(this.publishPackageItemChannel());
}

@Bean
@PublishPackageChannel
MessageChannel publishPackageChannel()
{
    return MessageChannels.direct().get();
}

@Bean
@PublishPackageItemChannel
MessageChannel publishPackageItemChannel()
{
    return MessageChannels.direct().get();
}

@Splitter
List<PackageEntry> splitPackage(final Package bundle)
{
    final List<PackageEntry> enrichedEntries = new ArrayList<>();
    for (final PackageEntry entry : bundle.getItems())
    {
        enrichedEntries.add(entry);
    }
    return enrichedEntries;
}

@Bean
GatewayProxyFactoryBean publishingGateway()
{
    final GatewayProxyFactoryBean proxy = new GatewayProxyFactoryBean(PublishingService.class);
    proxy.setBeanFactory(this.beanFactory);
    proxy.setDefaultRequestChannel(this.publishPackageChannel());
    proxy.setDefaultReplyChannel(this.publishPackageChannel());
    proxy.afterPropertiesSet();
    return proxy;
}
Next, the CMS publish flows are attached to the publishPackageItem channel and, based on the type after splitting, routed to a specific element channel for handling. After splitting the page, only specific element types may have a subscribing flow.
@Inject
public CmsPublishFlow(@PublishPackageItemChannel final MessageChannel channelPublishPackageItem)
{
    this.channelPublishPackageItem = channelPublishPackageItem;
}

@Bean
@PublishPageChannel
MessageChannel channelPublishPage()
{
    return MessageChannels.direct().get();
}

@Bean
IntegrationFlow flowPublishContent()
{
    return flow -> flow
            .channel(this.channelPublishPackageItem)
            .filter(PackageEntry.class, p -> p.getEntry() instanceof Page)
            .transform(PackageEntry.class, PackageEntry::getEntry)
            .split(Page.class, this::traversePageElements)
            .<Content, String>route(Content::getType, mapping -> mapping
                    .resolutionRequired(false)
                    .subFlowMapping(PAGE, sf -> sf.channel(channelPublishPage()))
                    .subFlowMapping(IMAGE, sf -> sf.channel(channelPublishAsset()))
                    .defaultOutputToParentFlow());
            //.channel(IntegrationContextUtils.NULL_CHANNEL_BEAN_NAME);
}
Finally, my goal is to subscribe to the channel and handle each element accordingly. I subscribe this flow to channelPublishPage; each subscriber may handle the element differently.
@Inject
@PublishPageChannel
private MessageChannel channelPublishPage;

@Bean
IntegrationFlow flowPublishPage()
{
    return flow -> flow
            .channel(this.channelPublishPage)
            .publishSubscribeChannel(c -> c
                    .subscribe(s -> s
                            .<Page>handle((p, h) -> this
                                    .generatePage(p))));
}
I somehow feel that the problem is here:
proxy.setDefaultRequestChannel(this.publishPackageChannel());
proxy.setDefaultReplyChannel(this.publishPackageChannel());
Consider not using the same channel for requests and for awaiting replies. This way you create a loop and really unexpected behavior.
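For example, a minimal sketch with a dedicated reply channel (the publishReplyChannel bean name is an assumption):
// give the gateway its own reply channel so replies do not
// re-enter the request flow
@Bean
MessageChannel publishReplyChannel()
{
    return MessageChannels.direct().get();
}

@Bean
GatewayProxyFactoryBean publishingGateway()
{
    final GatewayProxyFactoryBean proxy = new GatewayProxyFactoryBean(PublishingService.class);
    proxy.setBeanFactory(this.beanFactory);
    proxy.setDefaultRequestChannel(this.publishPackageChannel());
    proxy.setDefaultReplyChannel(this.publishReplyChannel());
    proxy.afterPropertiesSet();
    return proxy;
}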
