Server-sent events in Spring WebFlux with Reactor

Could this be a correct way to dispatch common topic information to browser clients?
@RestController
public class GenvScriptHandler {

    DirectProcessor<String> topicData = DirectProcessor.create();
    FluxSink<String> sink;
    int test;

    @GetMapping(value = "/addTopic")
    public void addTopic() {
        if (sink == null) {
            sink = topicData.sink();
        }
        sink.next(String.valueOf(test++));
    }

    @GetMapping(value = "/getTopic", produces = "text/event-stream")
    public Flux<String> getTopic() {
        Flux<String> autoConnect = topicData.publish().autoConnect();
        return autoConnect;
    }
}
As I use a DirectProcessor, there is no backpressure support, so I wonder how the Flux is consumed when it is sent through SSE. Can a subscriber request fewer elements than the number pushed into the Flux?
http://projectreactor.io/docs/core/release/reference/#_directprocessor
As a consequence, a DirectProcessor signals an IllegalStateException to its subscribers if you push N elements through it but at least one of its subscribers has requested less than N.

Subscribing with an SSE request issues a request(1), not a request(Integer.MAX_VALUE). So if I call the sink 1000 times, the processor overflows and an exception is thrown, even though it has subscribers:
reactor.core.Exceptions$OverflowException: Can't deliver value due to lack of requests
It is safer to use an EmitterProcessor or a ReplayProcessor in my case.
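As a side note, here is a minimal sketch of that safer setup using the Sinks API, which replaced these processors in Reactor 3.4+ (the class and field names are mine, not from the original post); onBackpressureBuffer() buffers for slow SSE subscribers instead of failing with the overflow described above:
@RestController
public class TopicController {

    // buffers emitted values until subscribers request them
    private final Sinks.Many<String> topic = Sinks.many().multicast().onBackpressureBuffer();

    private int test;

    @GetMapping(value = "/addTopic")
    public void addTopic() {
        // tryEmitNext never throws; inspect the returned EmitResult if delivery matters
        topic.tryEmitNext(String.valueOf(test++));
    }

    @GetMapping(value = "/getTopic", produces = "text/event-stream")
    public Flux<String> getTopic() {
        return topic.asFlux();
    }
}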

Related

Difference between DirectChannel and FluxMessageChannel

I was reading about Spring Integration's FluxMessageChannel here and here, but I still don't understand exactly what the differences are between using a DirectChannel and a FluxMessageChannel with Project Reactor. Since the DirectChannel is stateless and controlled by its pollers, I'd expect the FluxMessageChannel to not be needed. I'm trying to understand when exactly each should be used, and why, in Reactive Streams applications implemented with Spring Integration.
I currently have a reactive project that uses DirectChannel, and it seems to work fine; even the documentation says:
the flow behavior is changed from an imperative push model to a reactive pull model
I'd like to understand when to use each of the channels and what is the exact difference when working with Reactive Streams.
The DirectChannel does not have any poller, and its implementation is very simple: whenever a message is sent to it, the handler is called, in the same caller's thread:
public class DirectChannel extends AbstractSubscribableChannel {

    private final UnicastingDispatcher dispatcher = new UnicastingDispatcher();

    private volatile Integer maxSubscribers;

    /**
     * Create a channel with default {@link RoundRobinLoadBalancingStrategy}.
     */
    public DirectChannel() {
        this(new RoundRobinLoadBalancingStrategy());
    }
Where that UnicastingDispatcher is:
public final boolean dispatch(final Message<?> message) {
    if (this.executor != null) {
        Runnable task = createMessageHandlingTask(message);
        this.executor.execute(task);
        return true;
    }
    return this.doDispatch(message);
}
(There is no executor option for the DirectChannel)
private boolean doDispatch(Message<?> message) {
    if (tryOptimizedDispatch(message)) {
        return true;
    }
    ...

protected boolean tryOptimizedDispatch(Message<?> message) {
    MessageHandler handler = this.theOneHandler;
    if (handler != null) {
        try {
            handler.handleMessage(message);
            return true;
        }
        catch (Exception e) {
            throw IntegrationUtils.wrapInDeliveryExceptionIfNecessary(message,
                    () -> "Dispatcher failed to deliver Message", e);
        }
    }
    return false;
}
That's why I call it an "imperative push model". The caller in this case is going to wait until the handler finishes its job, and if you have a big flow, everything is going to be stopped in the sender thread until a sent message has reached the end of the flow of direct channels. In two simple words: the publisher is in charge of the whole execution, and it is blocked in this case. You haven't faced any problems with your DirectChannel-based solution just because you haven't yet used reactive non-blocking threads, like Netty in WebFlux or the MongoDB reactive driver.
The FluxMessageChannel was really designed for Reactive Streams purposes, where the subscriber is in charge of handling a message, which it pulls from the Flux on demand. This way, just after sending, the publisher is free to do anything else, because it is already the subscriber's responsibility to handle the message.
I would say it is definitely OK to use a DirectChannel as long as your handlers are not blocking. If they are blocking, you should go with a FluxMessageChannel. Also, don't forget that there are other channel types for different tasks: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations
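To make that concrete, here is a rough sketch of what moving a blocking handler behind a FluxMessageChannel could look like in the Java DSL (the channel, flow, and handler names are hypothetical, not from the question):
@Bean
public MessageChannel reactiveChannel() {
    return new FluxMessageChannel();
}

@Bean
public IntegrationFlow reactiveFlow(MessageChannel reactiveChannel) {
    return IntegrationFlows.from("inputChannel")
            // hand-off point: the sender returns immediately, the subscriber pulls on demand
            .channel(reactiveChannel)
            // hypothetical blocking handler, now driven by the subscriber
            .handle(message -> blockingHandler(message))
            .get();
}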

Spring Integration - Priority aggregator

I have the following app requirements:
Messages are received from RabbitMQ and then aggregated based on some more complex rules, e.g. based on a types property (with a pre-given type-time mapping) and based on the time a message has already been waiting in the queue (an old property).
All messages should be released at some variable message rate, e.g. 1 msg/sec up to 100 msg/sec. This rate is controlled and set by a service that monitors the RabbitMQ queue size (one queue that is not related to this component and is further up the pipeline); if there are too many messages in the queue, it decreases the rate.
As you can see in the image, in one use-case three messages are already aggregated and waiting to be released next second (since the current rate is 1 msg/sec), but just at that time a MSG arrives with id:10, and it updates AGGREGATED 2, making it the 1st message by priority. So on the next tick, instead of releasing AGGREGATED 3, we release AGGREGATED 2, since it now has a higher priority.
Now, the question is: can I use the Spring Integration aggregator for this? I do not know whether it supports prioritization of messages during aggregation. I know of groupTimeout, but that only adjusts a single message group, not the priority of other groups. Would it be possible to use a MessageGroupStoreReaper that adjusts all other aggregated messages by priority when a new MSG arrives?
UPDATE
I did an implementation like the following; it seems OK for now. It aggregates messages as they arrive, and the comparator sorts the messages by my custom logic.
Do you think there could be some problems with this (concurrency etc.)? I can see in the logs that the poller is occasionally invoked more than once. Is this normal?
2021-01-18 13:52:05.277 INFO 16080 --- [ scheduling-1] ggregatorConfig$PriorityAggregatingQueue : POLL
2021-01-18 13:52:05.277 INFO 16080 --- [ scheduling-1] ggregatorConfig$PriorityAggregatingQueue : POLL
2021-01-18 13:52:05.277 INFO 16080 --- [ scheduling-1] ggregatorConfig$PriorityAggregatingQueue : POLL
2021-01-18 13:52:05.277 INFO 16080 --- [ scheduling-1] ggregatorConfig$PriorityAggregatingQueue : POLL
Also, is the commented-out doit method below the proper way to increase the max number of polled messages at runtime?
@Bean
public MessageChannel aggregatingChannel() {
    return new QueueChannel(new PriorityAggregatingQueue<>((m1, m2) -> {/* aggr here */},
            Comparator.comparingInt(x -> x),
            (m) -> {
                ExampleDTO d = (ExampleDTO) m.getPayload();
                return d.getId();
            }
    ));
}

class PriorityAggregatingQueue<K> extends AbstractQueue<Message<?>> {

    private final Log logger = LogFactory.getLog(getClass());

    private final BiFunction<Message<?>, Message<?>, Message<?>> accumulator;

    private final Function<Message<?>, K> keyExtractor;

    private final NavigableMap<K, Message<?>> keyToAggregatedMessage;

    public PriorityAggregatingQueue(BiFunction<Message<?>, Message<?>, Message<?>> accumulator,
            Comparator<? super K> comparator,
            Function<Message<?>, K> keyExtractor) {
        this.accumulator = accumulator;
        this.keyExtractor = keyExtractor;
        keyToAggregatedMessage = new ConcurrentSkipListMap<>(comparator);
    }

    @Override
    public Iterator<Message<?>> iterator() {
        return keyToAggregatedMessage.values().iterator();
    }

    @Override
    public int size() {
        return keyToAggregatedMessage.size();
    }

    @Override
    public boolean offer(Message<?> m) {
        logger.info("OFFER");
        return keyToAggregatedMessage.compute(keyExtractor.apply(m), (k, old) -> accumulator.apply(old, m)) != null;
    }

    @Override
    public Message<?> poll() {
        logger.info("POLL");
        Map.Entry<K, Message<?>> m = keyToAggregatedMessage.pollLastEntry();
        return m != null ? m.getValue() : null;
    }

    @Override
    public Message<?> peek() {
        Map.Entry<K, Message<?>> m = keyToAggregatedMessage.lastEntry();
        return m != null ? m.getValue() : null;
    }
}

// @Scheduled(fixedDelay = 10 * 1000)
// public void doit() {
//     System.out.println("INCREASE POLL");
//     pollerMetadata().setMaxMessagesPerPoll(pollerMetadata().getMaxMessagesPerPoll() * 2);
// }

@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata pollerMetadata() {
    PollerMetadata metadata = new PollerMetadata();
    metadata.setTrigger(new DynamicPeriodicTrigger(Duration.ofSeconds(30)));
    metadata.setMaxMessagesPerPoll(1);
    return metadata;
}

@Bean
public IntegrationFlow aggregatingFlow(
        AmqpInboundChannelAdapter aggregatorInboundChannel,
        AmqpOutboundEndpoint aggregatorOutboundChannel,
        MessageChannel wtChannel,
        MessageChannel aggregatingChannel,
        PollerMetadata pollerMetadata
) {
    return IntegrationFlows.from(aggregatorInboundChannel)
            .wireTap(wtChannel)
            .channel(aggregatingChannel)
            .handle(aggregatorOutboundChannel)
            .get();
}
Well, if a new message arrives into the aggregator to complete a group, then such a group is released immediately (if your ReleaseStrategy says so, though). The rest of the groups under the timeout will continue to wait for their schedule.
It is probably possible to come up with a smart algorithm relying on a single common schedule with the MessageGroupStoreReaper to decide whether we need to release a partial group or just discard it. Again: the ReleaseStrategy should give us a clue whether to release or not, even if partial. When a discard happens and we want to keep those messages in the aggregator, we need to resend them back to the aggregator after some delay. After expiration the group is removed from the store, and that happens when we have already sent into a discard channel; so it is better to delay those messages and let the aggregator clean up the groups, and after the delay we can safely send them back to the aggregator for a new expiration period, as parts of new groups.
You could probably also iterate over all of the messages in the store after releasing a normal group, to adjust some time key in their headers for the next expiration time.
I know this is a hard matter, but there really is no out-of-the-box solution, since the aggregator was not designed to affect other groups from the one we have just dealt with...
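For what it's worth, here is a bare-bones sketch of scheduling a MessageGroupStoreReaper against the aggregator's store, along the lines discussed above (the SimpleMessageStore, the timeout, and the schedule are assumptions for illustration; the aggregator must be configured with this same store and a discard channel):
@Bean
public MessageGroupStore messageStore() {
    return new SimpleMessageStore(); // must be the same store the aggregator uses
}

@Bean
public MessageGroupStoreReaper reaper(MessageGroupStore messageStore) {
    MessageGroupStoreReaper reaper = new MessageGroupStoreReaper(messageStore);
    reaper.setTimeout(30_000); // expire partial groups older than 30 seconds
    return reaper;
}

// Expired groups are sent to the aggregator's discard channel, from where they
// can be delayed and re-sent to the aggregator, as described above.
@Scheduled(fixedDelay = 10_000)
public void expireGroups() {
    reaper(messageStore()).run(); // proxied @Bean methods return the singletons
}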

Replay Kafka topic with Server-Sent-Events

I'm thinking about the following use-case and would like to validate whether the approach is conceptually valid.
The goal is to expose a long-running Server-Sent-Events (SSE) endpoint in Spring, replaying the same Kafka topic for each incoming connection (with some user-specific filtering).
The SSE is exposed in this way:
@GetMapping("/sse")
public SseEmitter sse() {
    SseEmitter sseEmitter = new SseEmitter();
    Executors
            .newSingleThreadExecutor()
            .execute(() -> dummyDataProducer.generate() // kafka ultimately
                    .forEach(payload -> {
                        try {
                            sseEmitter.send(payload);
                        } catch (IOException ex) {
                            sseEmitter.completeWithError(ex);
                        }
                    }));
    return sseEmitter;
}
On the other side, there is a KafkaListener method (a ConcurrentKafkaListenerContainerFactory is used):
@KafkaListener(topics = "${app.kafka.topic1}")
public void receive(
        @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) Integer id,
        @Payload Object payload) {
    // do something ...
}
As far as I know, a Kafka consumer application uses one thread for reading data from a single topic. This somehow conflicts with the idea of SSE, where a dedicated long-running thread is created for each incoming connection.
Is this a valid approach for this use-case? If so, how do I accomplish it properly?
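Not a validated answer, but one common sketch for the fan-out half of this is to keep a registry of live emitters and let the single listener thread broadcast to all of them (a true per-connection replay would instead need a consumer per connection with a fresh group id; the field and endpoint names here are illustrative):
private final List<SseEmitter> emitters = new CopyOnWriteArrayList<>();

@GetMapping("/sse")
public SseEmitter sse() {
    SseEmitter emitter = new SseEmitter(0L); // no timeout
    emitter.onCompletion(() -> emitters.remove(emitter));
    emitter.onTimeout(() -> emitters.remove(emitter));
    emitters.add(emitter);
    return emitter;
}

@KafkaListener(topics = "${app.kafka.topic1}")
public void receive(@Payload Object payload) {
    // the single consumer thread fans each record out to all connected clients
    for (SseEmitter emitter : emitters) {
        try {
            emitter.send(payload);
        } catch (IOException ex) {
            emitter.completeWithError(ex);
        }
    }
}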

How to limit the number of STOMP clients subscribing to a specific topic in Spring, based on a condition?

I have been researching a way to limit the number of clients who can subscribe to a specific STOMP topic, but have not yet figured out the right approach for my needs.
My use case is a game, which I am developing in Angular (ng2-stompjs STOMP client) and Spring Boot WebSockets (for the moment, the Spring in-memory message broker is in use).
The idea is that a user can be connected and subscribed to a "/lobby" STOMP topic, where he sees the open game rooms, which could be in different statuses, for example in play, or not yet started due to a low number of joined players.
I'd like to intercept and programmatically restrict a client's subscription to a specific "/room/{roomId}" topic IF the MAX number of players has been reached, for example 4. There could also be some simple client-side validation to restrict that, but I believe client-side validation alone is not sufficient.
So my main questions are:
How can a specific STOMP topic subscription be intercepted in Spring?
Is it possible to return some kind of error message to the requesting client when the subscription cannot be made?
I'd really appreciate your help, thank you in advance!
You could implement a StompEventListener which listens for subscriptions; in it, we can keep a map from a destination (room number) to the count of players in that particular room. If the count is already at the max, reject the subscription.
@Service
class StompEventListener {

    private final Map<String, Integer> roomIdVsPlayerCount = new HashMap<>();

    @EventListener
    public void handleSubscription(SessionSubscribeEvent event) {
        StompHeaderAccessor accessor = StompHeaderAccessor.wrap(event.getMessage());
        String destination = accessor.getDestination();
        String roomId = destination.substring(...); // parsed room id
        if (roomIdVsPlayerCount.getOrDefault(roomId, 0) >= MAX_ALLOWED_PLAYERS) {
            // Throw an exception, which will terminate that client connection,
            // or send an error message like so:
            // simpMessagingTemplate.convertAndSend(<some_error_message>);
            return;
        }
        // Not at the maximum, so do further logic to actually subscribe the user, then:
        roomIdVsPlayerCount.merge(roomId, 1, Integer::sum);
    }

    @EventListener
    public void handleUnsubscription(SessionUnsubscribeEvent event) {
        ...
    }
}
Useful references:
SessionSubscribeEvent (for handling the subscriptions)
convertAndSend (for sending the error messages to the client)
EDIT
Please try sending the exception from a ChannelInterceptor instead, since the above did not propagate the exception to the client. The map we defined earlier can be defined as a bean in a separate class, accessible (with @Autowired) to both the event handler (for incrementing and decrementing) and the TopicSubscriptionInterceptor (for validation).
@Component
class TopicSubscriptionInterceptor implements ChannelInterceptor {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        StompHeaderAccessor accessor = StompHeaderAccessor.wrap(message);
        String destination = accessor.getDestination();
        String roomId = destination.substring(...); // parsed room id
        if (roomIdVsPlayerCount.getOrDefault(roomId, 0) >= MAX_ALLOWED_PLAYERS) {
            // Throw an exception here, which will terminate that client connection
        }
        // Since it is not at the limit, let the message through
        return message;
    }
}
Useful reference for implementing a TopicSubscriptionInterceptor: TopicSubscriptionInterceptor
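One wiring detail the answer leaves implicit: the interceptor only runs if it is registered on the client inbound channel. A minimal sketch of that wiring, assuming the TopicSubscriptionInterceptor bean above:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Autowired
    private TopicSubscriptionInterceptor topicSubscriptionInterceptor;

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        // every inbound STOMP frame (including SUBSCRIBE) now passes through preSend
        registration.interceptors(topicSubscriptionInterceptor);
    }
}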

Running tasks in a different thread in a Spring WebFlux annotated controller

I have a Spring WebFlux annotated controller as below:
@RestController
public class TestBlockingController {

    Logger log = LoggerFactory.getLogger(this.getClass().getName());

    @GetMapping()
    public Mono<String> blockForXSeconds(@RequestParam("block-seconds") Integer blockSeconds) {
        return getStringMono();
    }

    private Mono<String> getStringMono() {
        Integer blockSeconds = 5;
        String type = new String();
        try {
            if (blockSeconds % 2 == 0) {
                Thread.sleep(blockSeconds * 1000);
                type = "EVEN";
            } else {
                Thread.sleep(blockSeconds * 1000);
                type = "ODD";
            }
        } catch (Exception e) {
            log.info("Got Exception");
        }
        log.info("Type of block-seconds: " + blockSeconds);
        return Mono.just(type);
    }
}
How do I make getStringMono run in a different thread than the Netty server threads? The problem I am facing is that, since I am running on the server thread, I am getting low throughput (2 requests per second). How do I go about running getStringMono in a separate thread?
You can use the subscribeOn operator to delegate the task to a different thread pool:
Mono.defer(() -> getStringMono()).subscribeOn(Schedulers.elastic());
Note, though, that this kind of blocking should be avoided in a reactive application at all costs. If possible, use a client which supports non-blocking IO and returns a promise type (Mono, CompletableFuture, etc.). If you just want an artificial delay, use Mono.delay instead. (On recent Reactor versions, Schedulers.boundedElastic() replaces the now-deprecated Schedulers.elastic().)
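Putting that together with the question's controller, a minimal sketch (using Schedulers.boundedElastic() as noted above):
@GetMapping()
public Mono<String> blockForXSeconds(@RequestParam("block-seconds") Integer blockSeconds) {
    return Mono.defer(this::getStringMono)
            .subscribeOn(Schedulers.boundedElastic()); // blocking work runs off the Netty event loop
}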
You can use the Mono.defer() method.
Its signature is:
public static <T> Mono<T> defer(Supplier<? extends Mono<? extends T>> supplier)
Your REST API should look like this:
@GetMapping()
public Mono<String> blockForXSeconds(@RequestParam("block-seconds") Integer blockSeconds) {
    return Mono.defer(() -> getStringMono());
}
The defer operator is there to make this source lazy, re-evaluating the content of the lambda each time there is a new subscriber. This will increase your API throughput.
Here you can view the detailed analysis.
