How is it possible that a REST microservice can communicate with another microservice that is a hybrid, i.e. one that can communicate both over REST and over a message queue? For example, an API gateway: towards the outside world it talks to an app or a mobile phone via REST, but the communication with the backend goes through a message queue.
Use case:
My homepage wants to get a Vehicle from the database. It asks the API gateway via a GET request. The API gateway takes the GET request and publishes it to the message queue. Another microservice takes the message and publishes the result. The API gateway then consumes the result and sends it back as the response.
How can I implement this? Should I use Spring Boot with Apache Kafka? Do I need to implement asynchronous communication?
(Sorry for my English, I'm German.)
There are a few approaches to this situation.

You might create a topic for each client request and wait for the reply on the other side; e.g., a DriverService would read the request message, fetch all your data and publish it to your client's request topic. As soon as you consume the response message, you destroy that topic.

BUT such 'temporary' topics might take too long to be deleted in a request-response interaction (unless the broker configuration allows it, e.g. via the delete.topic.enable property), and you need to monitor possible topic overgrowth.

A WebSocket is another possible solution. Your client would start listening on a specific topic, previously agreed with your server, and wait for the response within a given timeout, while your DriverService publishes to that specific socket channel.
Spring Boot offers great starters for both Kafka and WebSockets. If you are expecting a large volume of transactions, I would go with a mixed strategy: use Kafka to help your backend scale and process all transactions, then respond to the client via WebSocket.
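As a rough illustration of that mixed strategy (not part of the original answer): the topic names vehicle-requests/vehicle-responses and the /topic/vehicles STOMP destination below are made up, the usual Kafka and WebSocket configuration is assumed to be in place, and the request is exposed as a POST that is merely acknowledged, since the actual result comes back over the socket.

    import org.springframework.http.ResponseEntity;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.messaging.simp.SimpMessagingTemplate;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RestController;

    // Gateway sketch: HTTP in, Kafka towards the backend, WebSocket push back to the browser.
    @RestController
    public class VehicleGatewayController {

        private final KafkaTemplate<String, String> kafka;
        private final SimpMessagingTemplate webSocket;

        public VehicleGatewayController(KafkaTemplate<String, String> kafka,
                                        SimpMessagingTemplate webSocket) {
            this.kafka = kafka;
            this.webSocket = webSocket;
        }

        // The client asks over HTTP; the gateway only acknowledges and forwards the request to Kafka.
        @PostMapping("/vehicles/{id}/requests")
        public ResponseEntity<Void> requestVehicle(@PathVariable String id) {
            kafka.send("vehicle-requests", id);
            return ResponseEntity.accepted().build();      // 202: "processing"
        }

        // The backend service eventually publishes the result; push it to the agreed WebSocket topic.
        @KafkaListener(topics = "vehicle-responses")
        public void onVehicleResponse(String vehicleJson) {
            webSocket.convertAndSend("/topic/vehicles", vehicleJson);
        }
    }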
Related
I have a UI application (displaying streaming data) that makes a WebSocket connection to a Spring Boot microservice (running on multiple JVMs). This service forwards the request to one of the upstream servers and listens for the responses on a JMS queue coming from the upstream server; those response messages then have to be returned over the socket.

The issue we are facing: since the socket is point-to-point, and the Spring Boot application runs on multiple instances that all listen to the same JMS queue, we are unable to serve the data back over the WebSocket when the message is received on an instance other than the one that made the request upstream.
Here's the basic flow:
WebSocket -> instance1, instance2, instance3 -> Data provider
Instance1 made the request to data provider.
Data provider sends the data back to the queue
Instance 3 receives the message, but it doesn't have the socket connection to send the data back.
We had an interim solution using a correlation ID in the JMS headers and selectors on the queue; however, the data publisher is now unable to provide a correlation ID for us to depend on.
Does anybody have a better suggestion to address this?
Since you're using a request/reply pattern with JMS you must either use a correlation ID or a unique temporary queue for the response.
You indicated that, "the data publisher is not able to provide the correlation id to depend on." However, your application actually provides the correlation ID. The "data provider" in this case just needs to take it from the message it receives and put it into the response message. The process only requires 2 method calls by the "data provider" - javax.jms.Message.getJMSCorrelationID and javax.jms.Message.setJMSCorrelationID.
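In code, the echo on the "data provider" side is only this (a sketch; creating the connection, session and producer is assumed to happen elsewhere):

    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public class DataProviderReplier {

        // Copy the correlation ID from the request into the response before sending it
        // to the shared reply queue, so the requesting instance can match it up.
        public void reply(Session session, MessageProducer replyProducer,
                          Message request, String payload) throws JMSException {
            Message response = session.createTextMessage(payload);
            response.setJMSCorrelationID(request.getJMSCorrelationID());
            replyProducer.send(response);
        }
    }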
If the "data provider" can't do this then it's doubtful they will be able to accomplish the other option of using a unique temporary queue for the response. However, it's worth explaining in any case. When one of your "instance" servers sends the request message it first needs to use javax.jms.Session.createTemporaryQueue to create a temporary queue and then take the return parameter of that method and set it on the request message using javax.jms.Message.setJMSReplyTo. When the "data provider" receives the message they will get this value using javax.jms.Message.getJMSReplyTo and then send the response to this queue where the "instance" will then retrieve it.
These are the two generally accepted ways to implement a request/response pattern with JMS. I don't know of any other ways to implement such a pattern.
I have read a couple of articles about communication between microservices and have chosen the event-based communication pattern, but now I am wondering how the client is supposed to communicate. If it sends a request to the API gateway, should it wait for a response (which might take time due to the event-based nature of the communication between the microservices internally), or should the gateway reply with "processing" and let the client poll to check whether the request has completed?
What is the standard practice for client --> api gateway --> microservices communication?
Most of the time you will find that Client --> API Gateway --> Microservice communication is actually synchronous, which means the client needs to wait and block until a response is received. Typically it is implemented as an HTTP call that the client fires at the API gateway and that then reaches the microservice behind it. This doesn't seem to be the kind of event-based communication you are talking about.
The standard practice for event-based communication would be something like Client --> Event/Message Broker --> Microservice. This is an asynchronous approach where the client doesn't block or wait for a response. However, the client needs a back-channel event-handling process that listens for the response coming back from the microservice: Microservice --> Event/Message Broker --> Client.
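A minimal sketch of that back channel, assuming Spring for Apache Kafka and invented topic names (the broker technology is not prescribed by the answer):

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class AsyncOrderClient {

        private final KafkaTemplate<String, String> broker;

        public AsyncOrderClient(KafkaTemplate<String, String> broker) {
            this.broker = broker;
        }

        // Fire the command and return immediately; nothing blocks waiting for the outcome.
        public void placeOrder(String orderId, String orderJson) {
            broker.send("order-commands", orderId, orderJson);
        }

        // The back channel: a listener picks up the result event whenever the microservice publishes it.
        @KafkaListener(topics = "order-results")
        public void onOrderResult(String resultJson) {
            // update client-side state / notify the user here
        }
    }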
I am using a microservice architecture for my current project, with RabbitMQ as the message broker. My issue is determining the best possible way to make "requests" to the microservices and return the eventual response. Currently I have a socket.io socket running: the browser client connects to it and sends events to the socket, the socket reads the events and publishes them to RabbitMQ, and they are then, of course, consumed by the services.
So my question:
Is my current setup good enough to keep using, or are there better ways?
In a microservice architecture, it is suggested that:

- client app to API gateway communication should be synchronous (like REST over HTTP),
- API gateway to microservice communication should also be synchronous,
- but service to service communication should be asynchronous.

"Another rule you should try to follow, as much as possible, is to use only asynchronous messaging between the internal services, and to use synchronous communication (such as HTTP) only from the client apps to the front-end services (API Gateways plus the first level of microservices)."
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication
Now, if I understood it right: when the user sends a request to the API gateway, which in turn calls the first service, that service will return an acknowledgement (with some GUID) which is passed back to the client application, but the services will keep on executing the request.
Now the question pops up: how will they notify the client application when the request has been processed completely? One way is for the client to check the status using the GUID passed to it.

But can it be done with some kind of push notification? How can we integrate server-to-server push notifications?
I have a slightly different understanding of this: it says communication between services should be asynchronous, while communication from the client to the API gateway, and from the API gateway to the service, should be REST API calls.

So we don't need to do anything special; these are simple API calls and the request/response pipeline will handle the tracking, while the asynchronous calls between services will increase the throughput of the services.
Now, if I understood it right: when the user sends a request to the API gateway, which in turn calls the first service, that service will return an acknowledgement (with some GUID) which is passed back to the client application, but the services will keep on executing the request.
No, the microservices should not continue to execute the request; it is already finished. They will, when required, update their internal cache (their local representation, to be more precise) of the needed remote data (data from the microservice that executed the request) when that remote data has changed. The best way to do that update is with integration events (i.e. when a microservice executes a request that mutates the data, it publishes an event to the subscribed microservices).

The microservices should not communicate with each other, not even asynchronously, in order to fulfill a request from the gateway or the clients. They should use background tasks to prepare the data ahead of time, for when a request comes in.
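A sketch of the publishing half of that idea (the topic name, types and Kafka as the broker are illustrative assumptions, not part of the answer):

    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;

    @Service
    public class VehicleService {

        private final KafkaTemplate<String, String> broker;

        public VehicleService(KafkaTemplate<String, String> broker) {
            this.broker = broker;
        }

        // Execute the mutating request, then publish an integration event so that
        // subscribed microservices can refresh their local representation in the background.
        public void updateVehicle(String vehicleId, String vehicleJson) {
            // ... persist the change in this service's own database first ...
            broker.send("vehicle-updated", vehicleId, vehicleJson);
        }
    }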
You're depicting a scenario where the whole interaction between the system and external actors (to put it bluntly, the users) follows an asynchronous model. This is perfectly reasonable, but only if you really need it. As a matter of fact, if you are choosing to let 'the outside' interact with your system through REST APIs, maybe you don't need it at all.

If the system receives requests through a synchronous application endpoint, such as a REST endpoint, it has to complete the request before sending a response, otherwise the response would be meaningless. Consider an API like
POST users/:username/notifications
A notification is asynchronous by its nature, but the request just states that 'a new notification should be appended to the notifications collection of the user'. The API responds 201, which means 'OK, the notification is now associated with the user; it will be pushed on some channel, eventually'. This is a 'transactional' way to describe an asynchronous interaction.
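A sketch of such an endpoint (the framework, topic name and DTO shape are my assumptions): the HTTP call completes transactionally while the actual push to the user happens later through the broker.

    import java.net.URI;

    import org.springframework.http.ResponseEntity;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class NotificationController {

        private final KafkaTemplate<String, String> broker;

        public NotificationController(KafkaTemplate<String, String> broker) {
            this.broker = broker;
        }

        @PostMapping("/users/{username}/notifications")
        public ResponseEntity<Void> create(@PathVariable String username,
                                           @RequestBody String notificationJson) {
            // ... append the notification to the user's collection (the "transactional" part) ...
            broker.send("notifications", username, notificationJson);   // pushed on some channel, eventually
            return ResponseEntity
                    .created(URI.create("/users/" + username + "/notifications"))
                    .build();                                            // 201
        }
    }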
Another scenario arises when the user wants to subscribe to the notification channel. I would expect this to be implemented with a bidirectional, asynchronous, pub/sub communication protocol, such as WebSockets.

In both cases, however, it doesn't matter how the microservices communicate with each other: if the request is synchronous, the first service in 'the chain' has to wait until it is ready to respond. This is the reason why the API gateway forwards the request over HTTP.

On the other hand, asynchronous communication can be used to enforce consistency between services, rather than to carry out the actual request. Let's say the Orders service sends data to a broker: each time some attribute of orders[orderId] changes, it publishes the change to the /orders/:orderId topic. At the same time, it exposes an internal HTTP endpoint. Each service caches data from the services it depends on. The User service makes a GET /orders/:orderId and, while sending a response to the requester, puts the data in a local cache and subscribes to the orders/:orderId topic. Each time a 'mutation' is sent on this topic, the User service catches it and applies the mutation to the corresponding cached object. The communication is, and stays, synchronous, and is relatively simple to manage; at the same time your system can hold replicated data and still be [eventually] consistent.
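And a sketch of that caching side, with the per-order topics collapsed into a single keyed topic for brevity (the names and the choice of Kafka are illustrative):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class OrderCache {

        private final Map<String, String> ordersById = new ConcurrentHashMap<>();

        // Filled right after the synchronous GET /orders/:orderId against the Orders service.
        public void put(String orderId, String orderJson) {
            ordersById.put(orderId, orderJson);
        }

        public String get(String orderId) {
            return ordersById.get(orderId);
        }

        // Every mutation the Orders service publishes is applied to the cached copy,
        // keeping the User service eventually consistent without extra synchronous calls.
        @KafkaListener(topics = "orders-mutations")
        public void onOrderMutation(ConsumerRecord<String, String> mutation) {
            ordersById.put(mutation.key(), mutation.value());
        }
    }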
Let me start by describing the system. There are two applications; let's call them Client and Server. There are also two queues, a request queue and a reply queue. The Client publishes to the request queue, and the Server listens on it to process the requests. After the Server processes a message, it publishes the result to the reply queue, which the Client is subscribed to. The Server application always publishes the reply to the predefined reply queue, not to a queue that the Client application determines.
I cannot make updates to the Server application. I can only update the Client application. The queues are created and managed by the Server application.
I am trying to implement the request/reply pattern from the Client, such that the reply from the Server is returned synchronously. I am aware of the "sendAndReceive" approach in Spring, and how it works both with a temporary queue for replies and with a fixed reply queue.
Spring AMQP - 3.1.9 Request/Reply Messaging
Here are the questions I have:
Can I utilize this approach with existing queues, which are managed and created by the Server application? If yes, please elaborate.
If my Client application is a scaled app (multiple instances of it running at the same time), how do I implement this in such a way that the wrong instance (one in which the request did not originate) does not read the reply from the queue?
Am I able to use the "Default" exchange to my advantage here, in addition to a routing key?
Thanks for your time and your responses.
Yes; simply use a Reply Listener Container wired into the RabbitTemplate.
IMPORTANT: the server must echo the correlationId message property set by the client, so that the reply can be correlated to the request in the client.
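A minimal wiring sketch for that (the queue name and timeout are placeholders; see the fixed-reply-queue section of the Spring AMQP documentation linked above):

    import org.springframework.amqp.rabbit.connection.ConnectionFactory;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ReplyConfig {

        // The template publishes requests and, because it is also the listener on the
        // reply container below, sendAndReceive()/convertSendAndReceive() block until
        // the correlated reply arrives or the timeout expires.
        @Bean
        public RabbitTemplate rabbitTemplate(ConnectionFactory cf) {
            RabbitTemplate template = new RabbitTemplate(cf);
            template.setReplyAddress("server.reply.queue");   // the server-managed reply queue
            template.setReplyTimeout(10_000);
            return template;
        }

        @Bean
        public SimpleMessageListenerContainer replyContainer(ConnectionFactory cf,
                                                             RabbitTemplate template) {
            SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
            container.setQueueNames("server.reply.queue");
            container.setMessageListener(template);           // wire the template in as the reply listener
            return container;
        }
    }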
You can't. Unlike JMS, RabbitMQ has no notion of message selection; each consumer (in this case, reply container) needs its own queue. Otherwise, the instances will get random replies and it is possible (highly likely) that the reply will go to the wrong instance.
...it publishes it to the reply queue...
With RabbitMQ, publishers don't publish to queues, they publish to exchanges with a routing key. It is bad practice to tightly couple publishers to queues. If you can't change the server to publish the reply to an exchange, with a routing key that contains something from the request message (or use the replyTo property), you are out of luck.
Using the default exchange encourages the bad practice I mentioned in the previous point (tightly coupling producers to queues). So, no, it doesn't help.
EDIT
If there's something in the reply that allows you to correlate it to a request, one possibility would be to add a delegating consumer on the server's reply queue: receive the reply, perform the correlation, and route the reply to the proper replyTo.
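A sketch of that delegating consumer (the lookup logic and the exchange/queue names are placeholders, since the correlation here is application-specific):

    import org.springframework.amqp.core.Message;
    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class ReplyDelegator {

        private final RabbitTemplate rabbit;

        public ReplyDelegator(RabbitTemplate rabbit) {
            this.rabbit = rabbit;
        }

        // Owns the server's fixed reply queue, correlates each reply to the instance
        // that made the request, and republishes it to that instance's own reply queue.
        @RabbitListener(queues = "server.reply.queue")
        public void delegate(Message reply) {
            String replyTo = lookupReplyTo(reply);            // application-specific correlation
            rabbit.send("replies", replyTo, reply);           // exchange bound to the per-instance queues
        }

        private String lookupReplyTo(Message reply) {
            // e.g. map a business key found in the reply body/headers to the requesting instance
            return "instance-1.reply.queue";                  // placeholder
        }
    }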