How to control the number of subscribers for a publisher in Java Reactor Core? (Spring)

I am creating a Flux publisher that fetches all objects in a collection using Spring Data MongoDB Reactive. I don't want so many concurrent subscribers calling findAll on the collection that they crash the system. I want to cap the number of active subscribers for the Flux (say, only 10 at a time), and once a slot frees up the next subscriber should be able to proceed.
I don't know which Reactor Core method I should use, or whether I should override the subscribe() method of the Publisher interface.

Your subscribers are not really the problem; the concurrent calls are. The solution is to define a boundedElastic Scheduler whose thread pool size is the maximum number of concurrent calls to your DB, and make sure all DB calls are executed on that scheduler.
private final Scheduler dbScheduler =
        Schedulers.newBoundedElastic(10, 1_000, "db"); // threadCap 10 = max concurrent DB calls (the limit from the question); queue cap and name are illustrative

public Flux<Object> findAll() {
    return repository.findAll()
            .subscribeOn(dbScheduler)           // run the DB subscription on the bounded pool
            .publishOn(Schedulers.parallel());  // hand results downstream off the DB threads
}
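To see the bound in action, here is a minimal self-contained sketch (the class name, the queue cap of 100, and the simulated repository are illustrative; the threadCap of 10 matches the limit from the question):
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class BoundedDbDemo {
    // At most 10 tasks run concurrently; up to 100 more wait in the queue
    private static final Scheduler DB_SCHEDULER =
            Schedulers.newBoundedElastic(10, 100, "db");

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 50; i++) { // 50 subscribers, but only 10 run at once
            Flux.defer(BoundedDbDemo::simulatedFindAll)
                .subscribeOn(DB_SCHEDULER)
                .subscribe(v -> System.out.println(
                        Thread.currentThread().getName() + " -> " + v));
        }
        Thread.sleep(5_000); // let the demo drain before the JVM exits
    }

    // Stand-in for repository.findAll(); sleeps to make the bound observable
    private static Flux<Integer> simulatedFindAll() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return Flux.range(1, 3);
    }
}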

Related

Is there a way to order execution of @KafkaListener methods?

Currently I have two @KafkaListener methods that consume events from two different topics. The problem is that I need to make one of them always execute first, and the second one after it. I tried @DependsOn. Thanks
Update: a description of my system follows.
I have the Kafka topics Account and TransactionRequest. A microservice called datagenerator generates 10,000 accounts on start, plus one transaction every 300 ms that includes accountId, amount, etc. A microservice called validationService listens to both topics: its @KafkaListener method validate checks incoming transactions, while another @KafkaListener method, insertAccounts, inserts accounts into a Cassandra DB as soon as they start being generated. The problem is that validate tries to check the amount on an account in the DB before it has been inserted, because the @KafkaListener responsible for inserting hasn't fired yet.
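For reference, a sketch of the two listeners described above (class, topic, and repository names are hypothetical); validate can fire before insertAccounts has populated Cassandra, which is exactly the ordering problem:
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class ValidationService {

    private final AccountRepository accountRepository; // hypothetical Cassandra repository

    public ValidationService(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    @KafkaListener(topics = "Account")
    public void insertAccounts(Account account) {
        accountRepository.save(account); // inserts into Cassandra
    }

    @KafkaListener(topics = "TransactionRequest")
    public void validate(TransactionRequest tx) {
        // May run before the matching account was inserted above
        accountRepository.findById(tx.getAccountId())
                .ifPresent(account -> checkAmount(account, tx));
    }

    private void checkAmount(Account account, TransactionRequest tx) {
        // hypothetical balance check
    }
}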

Find time between mono runnable subscribe call and runnable actually being executed

I have a Mono created from a Runnable. I am using an ExecutorService with a fixed thread pool size to create a Scheduler instance. I am creating multiple Monos using the code below and subscribing to them.
Mono.fromRunnable(new Runnable() {
    @Override
    public void run() {
        // Some business logic
    }
}).subscribeOn(scheduler);
These subscriptions can happen in parallel because of invocations from multiple calls, and since we use a common ExecutorService for all of them, there can be a lag between when a Mono is subscribed and when the "Some business logic" block above actually executes, due to the limited thread pool size of the ExecutorService. Is there a way to measure this lag between subscription and the moment the Runnable actually gets a thread?
There's no built-in way that I know of, so the best you'll likely do is use doOnSubscribe() (on the Mono object) to save one timestamp, and then create another timestamp as the first line of the run() method in that Runnable.
Those timestamps can then be compared to work out what, if any, lag is present.
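A minimal sketch of that idea (class name and pool size are illustrative; the single-thread pool just makes the lag visible). Note that doOnSubscribe() is placed downstream of subscribeOn(), so the hook fires on the calling thread the moment subscribe() is invoked:
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class SubscriptionLagDemo {

    public static void main(String[] args) throws InterruptedException {
        // One thread, so queued Monos visibly wait for it to free up
        Scheduler scheduler =
                Schedulers.fromExecutorService(Executors.newFixedThreadPool(1));

        for (int i = 0; i < 3; i++) {
            AtomicLong subscribedAt = new AtomicLong();
            Mono.fromRunnable(() -> {
                        // first line of the "run" logic: take the second timestamp
                        long lag = System.currentTimeMillis() - subscribedAt.get();
                        System.out.println("waited " + lag + " ms for a thread");
                        sleep(200); // stand-in for the business logic
                    })
                .subscribeOn(scheduler)
                // fires on the subscribing thread, before the task runs on the executor
                .doOnSubscribe(s -> subscribedAt.set(System.currentTimeMillis()))
                .subscribe();
        }

        Thread.sleep(1_000); // let the demo drain
        scheduler.dispose();
    }

    private static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}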

Send TraceId across Threads

We have a distributed application following a microservice architecture. In one of our microservices we follow the producer-consumer pattern.
The producer receives requests, persists them to the database, pushes each request into a BlockingQueue, and sends the response back to the client. The consumer, running on a separate thread, listens to the blocking queue; the moment it gets a request object it performs specific operations on it.
The requests received by the producer are persisted to the database asynchronously using CompletableFutures.
The problem is how to forward the traceId to the methods processing the request object inside the consumer thread, since the consumer thread might process these objects long after the response has been sent to the client.
Also, how do we forward the traceId across asynchronous calls?
Thanks
That's an interesting question. I think what you can do is persist the request together with its headers. Then, on the consumer side, you can use the SpanExtractor interface in a similar way to what we do here - https://github.com/spring-cloud/spring-cloud-sleuth/blob/v1.3.0.RELEASE/spring-cloud-sleuth-core/src/main/java/org/springframework/cloud/sleuth/instrument/web/TraceFilter.java#L351 (Span parent = spanExtractor().joinTrace(new HttpServletRequestTextMap(request));). That means we extract values from the HttpServletRequest to build a span. Then, once you've retrieved the Span, you can just use the Tracer#continueSpan(Span) method before processing, and Tracer#detach(Span) in the finally block. E.g.
Span parent = spanExtractor().joinTrace(new HttpServletRequestTextMap(request));
try {
    tracer.continueSpan(parent);
    // do whatever you need
} catch (Exception e) {
    tracer.addTag("error", doSthWithTheExceptionMsg(e));
} finally {
    tracer.detach(parent);
}
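For the BlockingQueue and CompletableFuture part of the question, the same continue/detach pattern can be applied by capturing the current span on the producer thread and handing it to the worker. A sketch against the Sleuth 1.x Tracer API (the TracedRequest wrapper, queue wiring, and process() are hypothetical):
import java.util.concurrent.BlockingQueue;
import org.springframework.cloud.sleuth.Span;
import org.springframework.cloud.sleuth.Tracer;

public class TraceForwarding {

    // Hypothetical wrapper pairing a request with the span captured on receipt
    static class TracedRequest {
        final Object request;
        final Span span;
        TracedRequest(Object request, Span span) {
            this.request = request;
            this.span = span;
        }
    }

    private final Tracer tracer;                       // injected Sleuth Tracer
    private final BlockingQueue<TracedRequest> queue;  // shared producer/consumer queue

    TraceForwarding(Tracer tracer, BlockingQueue<TracedRequest> queue) {
        this.tracer = tracer;
        this.queue = queue;
    }

    // Producer thread: capture the span of the request-handling thread
    void enqueue(Object request) throws InterruptedException {
        queue.put(new TracedRequest(request, tracer.getCurrentSpan()));
    }

    // Consumer thread (or a CompletableFuture task): continue, process, detach
    void consume() throws InterruptedException {
        TracedRequest traced = queue.take();
        Span continued = tracer.continueSpan(traced.span); // original traceId back in scope
        try {
            process(traced.request); // logs here carry the original traceId
        } finally {
            tracer.detach(continued);
        }
    }

    private void process(Object request) {
        // hypothetical business logic
    }
}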

batched message listener for spring (extending from DefaultMessageListenerContainer)

I have a basic JMS-related question in Spring.
Rather than consuming a single message at a time, it would be convenient to batch messages for a short duration (say a few seconds) and process them in bulk (thereby doing things in bulk). I see that JMS only provides an onMessage call that delivers a single message at a time. I came across BatchMessageListenerContainer, which seems to do exactly this. The recipe has been ported to Spring Batch, where it is being used.
I wanted to know if there are any fundamental problems in the approach itself. If there are none, we could propose that the Spring folks add this to the spring-jms artifact itself (without needing to resort to Spring Batch at all).
Thanks!
If your need is to process the messages in parallel, you can use DefaultMessageListenerContainer in your Spring project without needing Spring Batch. You set the concurrentConsumers attribute to the number of concurrent consumers you want.
@Bean
public DefaultMessageListenerContainer messageListener() {
    DefaultMessageListenerContainer listener = new DefaultMessageListenerContainer();
    listener.setConcurrentConsumers(Integer.valueOf(env.getProperty(JmsConstant.CONCURRENT_CONSUMERS_SIZE)));
    // listener.setMaxConcurrentConsumers(maxConcurrentConsumers);
    listener.setConnectionFactory((ConnectionFactory) queueConnectionFactory().getObject());
    listener.setDestination((Destination) jmsQueue().getObject());
    listener.setMessageListener(this);
    listener.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE);
    listener.setSessionTransacted(true);
    return listener;
}
Otherwise, if you're using Spring Batch, you can use remote chunking and BatchMessageListenerContainer; you can find an example here: https://github.com/spring-projects/spring-batch/tree/master/spring-batch-samples/src/main/java/org/springframework/batch/sample/remotechunking
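If the goal really is time/size-based batching rather than parallelism, the idea can also be hand-rolled at the listener level. A minimal sketch (illustrative names, size-based flush only; this is not the BatchMessageListenerContainer from the question) that also exposes the main fundamental problem with naive batching, namely that already-acknowledged messages buffered in memory are lost on a crash:
import java.util.ArrayList;
import java.util.List;
import javax.jms.Message;
import javax.jms.MessageListener;

public class BufferingListener implements MessageListener {

    private static final int BATCH_SIZE = 50; // illustrative; add a timer for the "few seconds" flush

    private final List<Message> buffer = new ArrayList<>();

    @Override
    public synchronized void onMessage(Message message) {
        buffer.add(message);
        if (buffer.size() >= BATCH_SIZE) {
            List<Message> batch = new ArrayList<>(buffer);
            buffer.clear();
            processBatch(batch);
        }
    }

    private void processBatch(List<Message> batch) {
        // e.g. one bulk insert instead of BATCH_SIZE separate ones.
        // Caveat: the messages were already acknowledged on receipt, so a
        // crash here loses the whole batch -- container-level batching
        // (as in BatchMessageListenerContainer) avoids this.
    }
}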

How to use Reactor to dispatch events to multiple consumers and to filter events based on event data?

I'm evaluating Reactor (https://github.com/reactor/reactor) to see whether it would be suitable for building an event-dispatching framework inside my Spring / enterprise application.
First, consider a scenario in which you have an interface A and concrete event classes B, C, and so on. I want to dispatch concrete events to multiple consumers, i.e. observers. These are registered to a global Reactor instance during bean post-processing, but they can also be registered dynamically. In most cases there is one producer sending events to multiple consumers at a high rate.
I have used Selectors, namely the ClassSelector, to dispatch the correct event types to the correct consumers. This seems to work nicely.
Reactor reactor = ...
B event = ...
Consumer<Event<B>> consumer = ...
// Registration is used to cancel the subscription later
Registration<?> registration = reactor.on(T(event.getClass()), consumer);
To notify, use the type of the event as the key:
B event = ...
reactor.notify(event.getClass(), Event.wrap(event));
However, I'm wondering whether this is the suggested way to dispatch events efficiently?
Secondly, is it possible to filter events based on the event data? If I understand correctly, Selectors only inspect the key; I'm not referring to event headers here but to domain-specific object properties. I was thinking of using Streams and Stream.filter(Predicate<T> p) for this, but is it also possible to filter using Reactor and Selectors? Of course, I could write a delegating consumer that inspects the data and delegates to registered consumers as needed.
There is a helper class called Selectors that creates the various kinds of built-in Selector implementations, among them the PredicateSelector. The PredicateSelector is very useful, as it gives you complete control over the matching of the notification key. The predicate can be a Spring @Bean, an anonymous inner class, a lambda, or anything else conforming to the simple Predicate interface.
Optionally, if you have the JsonPath library on your classpath, you can use the JsonPathSelector to match based on JsonPath queries.
In either case you don't need a separate object for the key if the important data is the domain object itself. Just notify with the object as the key and pass the wrapped Event<Object> as the second parameter.
MyPojo p = service.next();
reactor.notify(p, Event.wrap(p));
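For completeness, a sketch of data-based filtering with a PredicateSelector. This assumes the Selectors.predicate(...) factory from Reactor 1.x (verify the exact signature against your version's reactor.event.selector.Selectors); MyPojo.getAmount() is a hypothetical domain property:
// Register a consumer that only fires when the notification key -- here the
// domain object itself -- matches the predicate
Registration<?> registration = reactor.on(
        Selectors.predicate(key -> key instanceof MyPojo
                && ((MyPojo) key).getAmount() > 100),
        (Event<MyPojo> ev) -> handle(ev.getData()));

MyPojo p = service.next();
reactor.notify(p, Event.wrap(p)); // delivered only if the predicate matches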
