I'm starting a Spring Boot WebFlux project, but what if I use a usual (non-reactive) JDBC driver? Will the whole application stop being reactive?
No.
C.1. How Do I Wrap a Synchronous, Blocking Call?
It is often the case that a source of information is synchronous and blocking. To deal with such sources in your Reactor applications, apply the following pattern:
Mono blockingWrapper = Mono.fromCallable(() -> {
    return /* make a remote synchronous call */
});
blockingWrapper = blockingWrapper.subscribeOn(Schedulers.boundedElastic());
Create a new Mono by using fromCallable.
Return the synchronous, blocking resource.
Ensure each subscription happens on a dedicated single-threaded worker from Schedulers.boundedElastic().
You should use a Mono, because the source returns one value. You should use Schedulers.boundedElastic, because it creates a dedicated thread to wait for the blocking resource without impacting other non-blocking processing, while also ensuring that there is a limit to the number of threads that can be created, and to the blocking tasks that can be enqueued and deferred during a spike.
Note that subscribeOn does not subscribe to the Mono. It specifies what kind of Scheduler to use when a subscribe call happens.
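Applied to the JDBC driver from the question, a minimal sketch of this pattern, assuming Spring's JdbcTemplate (the repository, table, and column names are made up):

import org.springframework.jdbc.core.JdbcTemplate;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class UserRepository {

    private final JdbcTemplate jdbcTemplate;

    public UserRepository(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public Mono<String> findUserName(long id) {
        // The blocking JDBC query runs on a boundedElastic worker thread,
        // so the WebFlux event-loop threads are never blocked.
        return Mono
                .fromCallable(() -> jdbcTemplate.queryForObject(
                        "SELECT name FROM users WHERE id = ?", String.class, id))
                .subscribeOn(Schedulers.boundedElastic());
    }
}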
Yes.
A blocking driver will lead either to blocking due to thread-pool exhaustion or, if the pool is unbounded, to resource exhaustion.
In addition, non-reactive drivers have no notion of backpressure, a key tenet of reactive programming.
Here is the scenario: I have an event producer which publishes events. I referred to the Microsoft document https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-java-get-started-send
In my use case, I have a bean which creates the eventhubProducerClient connection at the start of my application. However, the producer.close() in my send method (from the above documentation) is called after each event is sent. This closes my producer, and when I want to send the next event, an exception is thrown because the producer has already been terminated.
What is the best way to handle producer.close()? Can I leave the producer open? Wouldn't that cause a memory leak? Is there a strategy for how I can handle this?
Any lead would be helpful. Thank you.
Each of the Event Hubs client types is safe to cache and use as a singleton for the lifetime of the application, which is best practice when events are being published or read regularly. The clients are responsible for efficient management of network, CPU, and memory use, working to keep usage low during periods of inactivity. Calling close on a client is required to ensure that network resources and other unmanaged objects are properly cleaned up.
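In Spring terms, that practice can look like the following minimal sketch, assuming the azure-messaging-eventhubs SDK; the connection string and hub name are placeholders:

import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EventHubConfig {

    // One producer for the whole application; Spring calls close() once,
    // at shutdown, instead of after every send.
    @Bean(destroyMethod = "close")
    public EventHubProducerClient eventHubProducerClient() {
        return new EventHubClientBuilder()
                .connectionString("<connection-string>", "<event-hub-name>")
                .buildProducerClient();
    }
}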
I'm using Retrofit with the Reactor adapter on a server. I thought it would make my calls non-blocking simply by using Mono for Retrofit and exposing it to Spring Boot as a Mono as well (all operations in between are reactive).
However, I noticed that when I run a few requests against another service that has some long operations inside (taking a few seconds), my service looks like it's blocking threads: several quick calls in the right configuration can cause my service to restart (even though it's supposed to only wait for the other service's response, and the data set it receives is small). Some tracing tools also suggest that my threads are busy while waiting for the response.
I tried to find some docs about this and looked a bit into the OkHttp and Okio code, and I couldn't find any part that could make it non-blocking; what's more, it looked like it would be blocking.
Is there something I might have missed in my Retrofit configuration that could make my calls non-blocking, or does someone know that there is no way to make Retrofit work this way? Or am I simply misinterpreting some data, and it should be non-blocking by default?
I add to my Retrofit builder such setup method to enable Reactor:
addCallAdapterFactory(ReactorCallAdapterFactory.create())
OkHttp follows a roughly thread-per-connection model. So with an HTTP/2 server, a single connection can support a variable number of requests with a fixed set of pooled connections and threads.
call.execute() will be blocking.
call.enqueue(...) will be non-blocking but uses threads internally and reads from the socket in blocking mode. This is hidden from clients, but OkHttp does not use Java NIO.
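To make the two modes concrete, a minimal sketch (the URL is a placeholder):

import java.io.IOException;
import okhttp3.Call;
import okhttp3.Callback;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class OkHttpCallStyles {

    public static void main(String[] args) throws IOException {
        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder()
                .url("https://example.com/api")
                .build();

        // Blocking: the calling thread waits for the complete response.
        try (Response response = client.newCall(request).execute()) {
            System.out.println("execute(): " + response.code());
        }

        // Asynchronous from the caller's point of view: one of OkHttp's own
        // dispatcher threads performs the (internally blocking) socket I/O.
        client.newCall(request).enqueue(new Callback() {
            @Override
            public void onFailure(Call call, IOException e) {
                e.printStackTrace();
            }

            @Override
            public void onResponse(Call call, Response response) throws IOException {
                try (Response r = response) {
                    System.out.println("enqueue(): " + r.code());
                }
            }
        });
    }
}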
In most of the web applications built on Vert.x, I have seen that within a single microservice people create two verticles.
One is a REST verticle to handle HTTP requests.
The other is a DAO verticle to communicate with the database.
Whenever there is an API request, the HTTP verticle communicates with the DAO verticle via the event bus.
But given that Vert.x is single-threaded, what is the benefit of creating two different verticles here? There is unnecessary overhead of communication over the event bus, whereas I could create just one verticle which handles both REST and I/O.
I can understand the case of a separate worker verticle for blocking calls. But for non-blocking I/O calls, what is the use case?
Vert.x is not single-threaded. It uses a multi-reactor pattern:
In a standard reactor implementation there is a single event loop thread which runs around in a loop delivering all events to all handlers as they arrive.
The trouble with a single thread is it can only run on a single core at any one time, so if you want your single threaded reactor application (e.g. your Node.js application) to scale over your multi-core server you have to start up and manage many different processes.
Vert.x works differently here. Instead of a single event loop, each Vertx instance maintains several event loops. By default we choose the number based on the number of available cores on the machine, but this can be overridden.
This means a single Vertx process can scale across your server, unlike Node.js.
So by running multiple Verticles, you can have your services spread across multiple threads/CPU cores.
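For illustration, a minimal sketch of deploying one verticle in several instances so that it is spread across event loops (all names are made up):

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Main {

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Four instances, each bound to its own event loop; Vert.x balances
        // incoming connections across instances sharing the same port.
        vertx.deployVerticle(HttpVerticle.class.getName(),
                new DeploymentOptions().setInstances(4));
    }

    public static class HttpVerticle extends AbstractVerticle {
        @Override
        public void start() {
            vertx.createHttpServer()
                    .requestHandler(req -> req.response()
                            .end("served by " + Thread.currentThread().getName()))
                    .listen(8080);
        }
    }
}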
Sorry if this is a duplicate question.
I have a legacy web application which uses queues (yes, the normal Java Queue) and custom polling (every 500 ms). A REST web service (/message) is called, which returns the message if there is one, otherwise an empty string.
My need: if any message is available in the queue, the client should get it in real time, so I can save up to 500 ms.
Is there any advantage to moving from the current approach to JMS? From this link, JMS MessageConsumer's messageListener makes push or pull?, it seems a MessageListener (the asynchronous way of processing) uses polling, which is no different from the current approach.
If it is vendor based, how do HornetQ/ActiveMQ support MessageListener?
EDIT:
The queue is used for integration of two systems: a web app and a standalone Java program.
Either receive or a MessageListener will be called as soon as a message arrives.
You can also control the pre-fetch size of your client.
Now, if all you need is to avoid the delay of polling every 500 ms, a full message-queue system may be overkill. It's perfectly fine to use java.util.Queue (or any of its subtypes).
If all you need is to block until an element of a java.util.Queue is available, and you don't need distributed messaging, persistence or anything like that, you could simply use a BlockingDeque and your thread would unblock as soon as a message arrives.
Look at this:
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingDeque.html
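A minimal sketch of that in-process alternative:

import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;

public class BlockingDequeExample {

    public static void main(String[] args) throws InterruptedException {
        BlockingDeque<String> messages = new LinkedBlockingDeque<>();

        Thread consumer = new Thread(() -> {
            try {
                // Blocks until an element is available; no 500 ms polling loop.
                String message = messages.take();
                System.out.println("Received: " + message);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        Thread.sleep(100);     // pretend the producer is busy for a while
        messages.put("hello"); // the consumer unblocks immediately
        consumer.join();
    }
}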
The async MessageListener is implemented using a push-based model: in ActiveMQ the broker sends a number of messages to the client based on its configured prefetch value, so that messages are ready for consumption. Whether or not this helps with your particular use case is a question you need to answer for yourself.
I am looking to implement a synchronous request-reply pattern using JMS inside a Java EE container. The sequence would be something like this:
Browser makes a request to web application for data. This is a blocking request (say on thread T1).
The web app needs to connect to a remote web service to fulfill the above request. So it forms a request and places it on a queue (with a reply-to queue also declared).
The remote service processes the requests and places the response on to the reply-to queue declared in step 2
The response is read from the reply-to Q in the web app and made available to the blocking thread T1 of step 1.
I have followed the answer provided by T.Rob (How to match MQ Server reply messages to the correct request)
QueueReceiver queueReceiver =
    session.createReceiver(destination, "JMSCorrelationID='customMessageId'");
TextMessage receivedMessage = (TextMessage) queueReceiver.receive(15000);
Is the above solution valid when running in a Java EE container (web module) where there could be multiple concurrent requests coming in?
This depends on what you mean by "valid": it will probably compile and work, but from a design perspective there is real room for improvement.
If your thread is blocking anyway, asynchronous communication won't add any value. Instead it will make things slower, consume resources, and possibly even create trouble (see the link below).
Whatever service is exposed by the system processing the messages (possibly an MDB), extract it into a separate service class and provide another frontend in the shape of a stateless session bean. That way your service is exposed through both a synchronous and an asynchronous interface, and the client can choose.
In your scenario your servlet just calls an EJB synchronously.
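For illustration, a minimal sketch of that synchronous frontend; all class and method names are made up:

// QuoteService.java -- the extracted service, exposed synchronously as an EJB
import javax.ejb.Stateless;

@Stateless
public class QuoteService {
    public String fetchQuote(String symbol) {
        // ... call the remote system here instead of routing through JMS ...
        return "quote-for-" + symbol;
    }
}

// QuoteServlet.java -- thread T1 blocks on a plain EJB call; no reply-to queue
import java.io.IOException;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/quote")
public class QuoteServlet extends HttpServlet {

    @EJB
    private QuoteService quoteService;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.getWriter().write(quoteService.fetchQuote(req.getParameter("symbol")));
    }
}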
As for the problems which may happen otherwise: Have a look at JMS request/response pattern in transactional environment (this approach uses a temporary queue).
Using a single queue (the way you have quoted in your question), you need a selector (the condition) to get relevant messages: This might be slow, depending on the volume in the queue.
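For context, a minimal sketch of the sending side that such a selector pairs with, assuming session, producer, and replyQueue are already set up:

// Send the request, declaring where and how the reply should come back.
TextMessage request = session.createTextMessage("payload");
request.setJMSReplyTo(replyQueue);
String correlationId = java.util.UUID.randomUUID().toString();
request.setJMSCorrelationID(correlationId);
producer.send(request);

// Block thread T1 until the matching reply arrives (or 15 seconds pass).
MessageConsumer consumer = session.createConsumer(
        replyQueue, "JMSCorrelationID='" + correlationId + "'");
TextMessage reply = (TextMessage) consumer.receive(15000);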
On the other hand, if you implement your servlet with asynchronous support as well (using @WebServlet(asyncSupported = true)), it's something different. In that case I would say it's a valid approach.
In that scenario you can save resources (namely threads; but the HTTP connections remain open), because one background thread listening on a queue can serve multiple clients. Consider this if you have performance or resource problems. Until then I suggest the synchronous way, because it is easier to implement.
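For completeness, a minimal sketch of that asynchronous variant; the correlation bookkeeping is greatly simplified and all names are made up:

import java.io.IOException;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/quote-async", asyncSupported = true)
public class AsyncQuoteServlet extends HttpServlet {

    // Correlation id -> parked request; a single JMS listener thread
    // completes these as replies arrive.
    static final Map<String, AsyncContext> PENDING = new ConcurrentHashMap<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync(); // frees the container thread
        String correlationId = UUID.randomUUID().toString();
        PENDING.put(correlationId, ctx);
        // ... send the JMS request with JMSCorrelationID = correlationId ...
    }

    // Called from the reply queue's MessageListener:
    static void complete(String correlationId, String body) throws IOException {
        AsyncContext ctx = PENDING.remove(correlationId);
        if (ctx != null) {
            ctx.getResponse().getWriter().write(body);
            ctx.complete();
        }
    }
}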
The JMS Request/Reply example from the EAI Patterns might fit for you.
It's well explained, and there are also samples in Java:
http://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReplyJmsExample.html