What is the use case of multiple verticles in a single Vert.x microservice? - microservices

In most of the web applications built with Vert.x, I have seen that people create two verticles within a single microservice.
One is a REST verticle to handle HTTP requests.
The other is a DAO verticle to communicate with the database.
Whenever there is an API request, the HTTP verticle communicates with the DAO verticle via the event bus.
But given that Vert.x is single-threaded, what is the benefit of creating two different verticles here? There would be unnecessary overhead of communication over the event bus, whereas I could create a single verticle that handles both REST and I/O.
I can understand the case for a separate worker verticle for blocking calls. But in the case of non-blocking I/O calls, what is the use of it?

Vert.x is not single-threaded. It uses a multi-reactor pattern:
In a standard reactor implementation there is a single event loop thread which runs around in a loop delivering all events to all handlers as they arrive.
The trouble with a single thread is that it can only run on a single core at any one time, so if you want your single-threaded reactor application (e.g. your Node.js application) to scale over your multi-core server, you have to start up and manage many different processes.
Vert.x works differently here. Instead of a single event loop, each Vertx instance maintains several event loops. By default we choose the number based on the number of available cores on the machine, but this can be overridden.
This means a single Vertx process can scale across your server, unlike Node.js.
So by deploying multiple verticles (or multiple instances of a verticle), you can spread your services across multiple event-loop threads and CPU cores.
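For illustration, here is a minimal sketch (the verticle class names and instance counts are made up) of deploying both verticles, each scaled to several event-loop instances so the work spreads over the available cores:

import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class Main {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Deploy the REST verticle on several event loops so HTTP handling
        // can use more than one CPU core (the instance count is just an example).
        vertx.deployVerticle("com.example.RestVerticle",
                new DeploymentOptions().setInstances(4));

        // Deploy the DAO verticle separately; it scales independently
        // and is reached from the REST verticle over the event bus.
        vertx.deployVerticle("com.example.DaoVerticle",
                new DeploymentOptions().setInstances(2));
    }
}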

Related

Scaling a microservice with frontend and backend instances

I am developing a series of microservices using Spring Boot and plan to deploy them on Kubernetes.
Some of the microservices are composed of an API which writes messages to a Kafka queue and a listener which listens to the queue and performs the relevant actions (e.g. write to the DB, construct messages for onward processing).
These services work fine locally but I am planning to run multiple instances of the microservice on Kubernetes. I'm thinking of the following options:
Run multiple instances as is (i.e. each microservice serves as an API and a listener).
Introduce a FRONTEND, BACKEND environment variable. If the FRONTEND variable is true, do not configure the listener process. If the BACKEND variable is true, configure the listener process.
This way I can scale how many frontend/backend services I need and also have the benefit of shutting down the backend services without losing requests.
Any pointers, best practice or any other options would be much appreciated.
You can do as you describe, with environment variables, or you may also be interested in building your app with different profiles/bean configurations and making two different images.
In both cases, you should use two different Kubernetes Deployments so you can scale and configure them independently.
You may also be interested in a Leader Election pattern where you want only one active replica, which only makes sense if a single replica should process the events from the queue. This can also be solved by simply using a single replica, depending on your availability requirements.
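As a rough sketch (the property name, topic, and class names are hypothetical), the frontend/backend split could be driven by a single property, so the Kafka listener bean is only created on backend instances:

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Only instantiated when backend.enabled=true (e.g. mapped from a BACKEND
// env variable); frontend-only replicas never register the listener.
@Component
@ConditionalOnProperty(name = "backend.enabled", havingValue = "true")
public class OrderListener {

    @KafkaListener(topics = "orders")  // topic name is illustrative
    public void onMessage(String message) {
        // write to the DB, construct messages for onward processing, etc.
    }
}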

Understanding the MajorDomo Pattern from NetMQ ZeroMQ

I am trying to understand how best to implement the MDP example in C#, to be used in a Windows service in a multiple-client, single-server environment.
I have read the docs but I am still unclear on the following:
Should all Worker instances be created on startup and left to run?
Should the Workers all be different types of services or just different instances of the same service?
Can I have one Windows service which contains the Broker and Workers, or is it best to split them out into their own services?
The example code I am using is the MajorDomo Pattern taken from here https://github.com/NetMQ/Samples
Yes, all workers in an MDP environment should be created independently of the requests, since the broker should not know how to create them.
Each worker handles a given "service" (contract). Obviously each contract should have at least one worker.
If you need parallelized handling of requests, and a given worker can only do one at a time, having extra workers for that service could make sense. Generally you would do this if multiple machines were involved however (horizontal scaling)
You can have the broker and workers in the same process. HOWEVER, if you want to update only a worker, taking down the broker at the same time can be annoying for the clients. I would recommend letting the broker be its own process, with the workers in one or more other processes.

Can I call same RPC func in many servers at the same time?

I am trying to find a fast mechanism for interprocess communication.
One thing I need is the ability to send one command to multiple application instances at the same time. I have spent a day trying to find out whether I can start many instances of the same app (a local RPC server app) and call the RPC from one client. I use the ncalrpc protocol for this purpose.
I just want to start several instances of the server and one instance of the client, and then call the same RPC function once on the client to have it evaluated on every running server.
Yes, you can either use multiple client threads (each making a separate server call) or modify the .acf and mark the call with the [async] attribute. If you go the latter route you can then make multiple calls on a single client thread. Note that asynchronous RPC is a fair bit more complicated than synchronous RPC due to needing to deal with call completions.
Making calls to multiple server instances (even local instances) is also made more complicated by the fact that you will have to somehow discover those endpoints, and the RPC namespace functions (RpcNs*) are no longer available as of Windows Vista.

Vert.x cluster Eventbus cross processes

Does anybody have some info, links, or pointers on how cross-process EventBus communication occurs? From the documentation I conclude that multiple Vert.x instances (thus separate JVM processes) can be clustered and communicate via the EventBus. However, there is little to no documentation on how to achieve it.
Looking into the docs, I can see that the publish/registerHandler methods take the address as a String, which works within a process, but I cannot wrap my head around how it works across processes and how to register and publish to an address. Does it work over HTTP or TCP? From an API perspective, do I need to pass a port and a process signature?
Cross-process communication happens via the EventBus. Multiple Vert.x instances can be started up and clustered to allow separate instances on the same or other machines to communicate. The low-level clustering is handled by Hazelcast. The configuration is handled by the cluster.xml file in the conf folder of your Vert.x install. You can learn more about the format of the file by looking at the Hazelcast docs. It is transparent to your handlers and works over TCP.
You can test it by running two or more instances on your local machine once they are started with the -cluster flag. Look at the example being run, and the config changes required in How to use eventbus messaging in vertx?
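As a minimal sketch (the address string is arbitrary), each process starts a clustered Vert.x instance and then uses the event bus exactly as it would locally; with the Hazelcast cluster manager on the classpath, cluster.xml drives discovery and the transport is handled for you:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class ClusterNode {
    public static void main(String[] args) {
        // Start Vert.x in clustered mode (equivalent to the -cluster flag).
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result();
                // Register a consumer on a plain String address; messages
                // published from any process in the cluster arrive here.
                vertx.eventBus().consumer("news.feed",
                        msg -> System.out.println("Got: " + msg.body()));
                // Publish to the same address; no host, port, or process
                // signature is needed, the cluster manager routes it over TCP.
                vertx.eventBus().publish("news.feed", "hello cluster");
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}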

Synchronous request-reply pattern in a Java EE container

I am looking to implement a synchronous request-reply pattern using JMS inside a Java EE container. The sequence would be something like this:
Browser makes a request to web application for data. This is a blocking request (say on thread T1).
The web app needs to connect to a remote web service to fulfill the above request. So it forms a request and places it on a queue (with a reply-to queue also declared).
The remote service processes the requests and places the response on to the reply-to queue declared in step 2
The response is read from the reply-to Q in the web app and made available to the blocking thread T1 of step 1.
I have followed the answer provided by T.Rob (How to match MQ Server reply messages to the correct request)
// Receive only the reply whose JMSCorrelationID matches the ID of the request we sent
QueueReceiver queueReceiver =
    session.createReceiver(destination, "JMSCorrelationID='customMessageId'");
TextMessage receivedMessage = (TextMessage) queueReceiver.receive(15000);
Is the above solution valid when running in a Java EE container (web module) where there could be multiple concurrent requests coming in?
This depends on the perception of "valid": It will probably compile and work. But from the design perspective, one could say that you can really improve it.
If your thread is blocking, any asynchronous communication won't add any value. Instead it will make it slow, it will consume resources, and it might even create trouble (see link below).
Whatever service is exposed by the system processing the messages (possibly an MDB), extract it into a separate service class, and provide another frontend in the shape of a stateless session bean. So your service is exposed by both a sync and an async interface, and the client can choose.
In your scenario your servlet just calls an EJB synchronously.
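A minimal sketch of that synchronous path (bean, servlet, and method names are invented for illustration) could look like this, with the servlet blocking only for the duration of the EJB call:

// QuoteService.java
import javax.ejb.Stateless;

@Stateless
public class QuoteService {
    public String fetchQuote(String symbol) {
        // call the remote web service synchronously here
        return "quote-for-" + symbol;
    }
}

// QuoteServlet.java
import java.io.IOException;
import javax.ejb.EJB;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/quote")
public class QuoteServlet extends HttpServlet {

    @EJB
    private QuoteService quoteService;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Thread T1 blocks here only while the EJB call runs.
        resp.getWriter().print(quoteService.fetchQuote(req.getParameter("symbol")));
    }
}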
As for the problems which may happen otherwise: Have a look at JMS request/response pattern in transactional environment (this approach uses a temporary queue).
Using a single queue (the way you have quoted in your question), you need a selector (the condition) to get relevant messages: This might be slow, depending on the volume in the queue.
On the other hand, if you implement your servlet with asynchronous support as well (using #WebServlet(asyncSupported = true)), it's something different. In that case I would say it's a valid approach.
In that scenario you can save resources (namely threads; but the HTTP connections remain open), because one background thread listening on a queue can serve multiple clients. Consider this if you have performance or resource problems. Until then I suggest the synchronous way, because it is easier to implement.
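Here is a rough sketch of that asynchronous variant (the servlet name and map-based bookkeeping are invented): the servlet parks the request, and the reply-queue listener completes it once the correlated answer arrives.

import java.io.IOException;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/quote-async", asyncSupported = true)
public class AsyncQuoteServlet extends HttpServlet {

    // Pending requests keyed by the JMS correlation ID that was sent out.
    private static final Map<String, AsyncContext> PENDING = new ConcurrentHashMap<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();   // releases the container thread
        String correlationId = UUID.randomUUID().toString();
        PENDING.put(correlationId, ctx);
        // ...send the JMS request here with JMSCorrelationID = correlationId...
    }

    // Called by the reply-queue listener (e.g. an MDB) when the answer arrives.
    public static void complete(String correlationId, String replyText) throws IOException {
        AsyncContext ctx = PENDING.remove(correlationId);
        if (ctx != null) {
            ctx.getResponse().getWriter().print(replyText);
            ctx.complete();
        }
    }
}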
The JMS Request/Reply example from the Enterprise Integration Patterns site might also fit for you.
It's well explained and there are also samples in Java:
http://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReplyJmsExample.html
