How to implement a request-response mechanism in DDS? (client-server)

I am new to DDS. I am using the Cyclone DDS C++ packages.
As far as I know, the DDS communication mechanism is publish/subscribe.
Is it possible to perform request-response in DDS, like in client-server applications?
A client PC would request data, and a central PC would respond with the data.
Should I implement it logically in the program? Is the following the best way to do it?
client PC -> request logic -> client PC publishes to a requestData topic
central PC -> waiting on the request topic -> central PC subscribes to the requestData topic and checks whether data is being requested
central PC -> response logic -> central PC publishes the data to a dataWrite topic
client PC -> waiting on the data topic -> client PC subscribes to the dataWrite topic and reads the data
Are there callback functions to perform this?

Yes, it is possible to implement request-response logic over DDS. In fact, the OMG RPC over DDS specification defines a Remote Procedure Call (RPC) framework using the basic building blocks of DDS. That seems to provide what you are looking for.
For some concrete documentation provided by a vendor that implements this kind of logic as part of their product, you could check out the RTI Connext User's Manual Part 4: Request-Reply Communication Pattern.
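If you prefer to build the pattern manually on plain topics, roughly as sketched in the question, the usual approach is a request topic plus a reply topic, with a correlation field so each client can match replies to its own requests. Below is a rough sketch of the "central PC" (replier) side using the ISO C++ DDS API that the Cyclone DDS C++ binding exposes; RequestData and ReplyData are hypothetical IDL-generated types (fields client_id, request_id, payload), so adapt the names to your own IDL. The on_data_available listener is the callback mechanism you asked about; the client side is symmetric (write to requestData, listen on dataWrite).

// Sketch only: replier ("central PC") side of a hand-rolled request/reply
// pattern over two topics, using the ISO C++ DDS API shipped with the
// Cyclone DDS C++ binding. RequestData/ReplyData are hypothetical types
// generated from your own IDL (fields: client_id, request_id, payload).
#include <chrono>
#include <thread>
#include <dds/dds.hpp>
#include "RequestReplyData.hpp"   // hypothetical IDL-generated header

class RequestListener : public dds::sub::NoOpDataReaderListener<RequestData> {
public:
    explicit RequestListener(dds::pub::DataWriter<ReplyData> writer) : writer_(writer) {}

    // Invoked by the middleware whenever a request sample arrives --
    // this is the callback the question asks about.
    void on_data_available(dds::sub::DataReader<RequestData>& reader) override {
        auto samples = reader.take();
        for (const auto& sample : samples) {
            if (!sample.info().valid()) continue;
            const RequestData& req = sample.data();

            ReplyData reply;
            reply.client_id(req.client_id());    // echoed back so the client
            reply.request_id(req.request_id());  // can correlate the reply
            reply.payload("response data here"); // your response logic
            writer_.write(reply);
        }
    }

private:
    dds::pub::DataWriter<ReplyData> writer_;
};

int main() {
    dds::domain::DomainParticipant participant(0);  // domain 0

    dds::topic::Topic<RequestData> request_topic(participant, "requestData");
    dds::topic::Topic<ReplyData>   reply_topic(participant, "dataWrite");

    dds::pub::Publisher publisher(participant);
    dds::pub::DataWriter<ReplyData> writer(publisher, reply_topic);

    dds::sub::Subscriber subscriber(participant);
    RequestListener listener(writer);
    dds::sub::DataReader<RequestData> reader(
        subscriber, request_topic, subscriber.default_datareader_qos(),
        &listener, dds::core::status::StatusMask::data_available());

    // The listener handles requests; just keep the process alive.
    while (true) std::this_thread::sleep_for(std::chrono::seconds(1));
}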

Related

Vert.x with ZeroMQ architecture

I have two apps that run on the same server. One is a C++ app and the other is a Java web server running on top of Vert.x. The web server wants to send requests to the C++ part and obtain responses. ZeroMQ seems a performant solution for the inter-process communication, and there is a bridge to Vert.x (https://github.com/dano/vertx-zeromq), but it is not very well documented.
I'm wondering whether what I have in mind can be done with this bridge:
The C++ ZeroMQ socket is a dealer; it registers with the event bus by sending the appropriate message containing the handler address.
The web server sends data to the socket's event bus handler address and gets the response in its callback.
Does this have a chance of working, or am I misunderstanding the ZeroMQ bridge?
That sounds correct to me, but you don't need ZeroMQ - you can just use regular TCP via the TCP event bus bridge (https://vertx.io/docs/vertx-tcp-eventbus-bridge/java/), which has good documentation and support.
I'm currently looking into the benefits of using ZeroMQ for my project and suspect it is useful for more complex topologies, like 'broadcasting an event without knowing who wants it (not requiring handlers to register)', but Vert.x doesn't support this from what I can see.
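If you do go the plain-TCP route suggested above, the C++ side only has to speak the TCP event bus bridge's frame format, which, as I understand the bridge documentation, is a 4-byte big-endian length prefix followed by a UTF-8 JSON envelope (type, address, replyAddress, body). A rough, untested sketch of registering a handler address and sending a request from C++ with POSIX sockets; the port 7000 and the addresses are made up:

// Rough sketch: talking to the Vert.x TCP event bus bridge from C++.
// Frame format (as I understand the bridge docs): a 4-byte big-endian
// length prefix, followed by a UTF-8 JSON message. Port and addresses
// below are made up -- use whatever your bridge is configured with.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

// Prefix a JSON payload with its length (4 bytes, network byte order).
static std::vector<char> frame(const std::string& json) {
    std::vector<char> out(4 + json.size());
    uint32_t len = htonl(static_cast<uint32_t>(json.size()));
    std::memcpy(out.data(), &len, 4);
    std::memcpy(out.data() + 4, json.data(), json.size());
    return out;
}

int main() {
    // Connect to the bridge (assumed here to listen on localhost:7000).
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7000);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
        std::cerr << "connect failed\n";
        return 1;
    }

    // Register a handler address so the Java side can send us messages.
    auto reg = frame(R"({"type":"register","address":"cpp.handler"})");
    write(fd, reg.data(), reg.size());

    // Send a request to an event bus address and ask for the reply to be
    // delivered to "cpp.handler"; the reply arrives as another frame on
    // this same socket, which a real program would read and parse.
    auto req = frame(
        R"({"type":"send","address":"web.service","replyAddress":"cpp.handler",)"
        R"("body":{"question":"ping"}})");
    write(fd, req.data(), req.size());

    close(fd);
    return 0;
}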

Async response from API Gateway in microservices

In a microservice architecture, it is suggested that:
client app to API gateway communication should be synchronous (like REST over HTTP);
API gateway to microservice communication should also be synchronous;
but service-to-service communication should be asynchronous.
Another rule you should try to follow, as much as possible, is to use only asynchronous messaging between the internal services, and to use synchronous communication (such as HTTP) only from the client apps to the front-end services (API Gateways plus the first level of microservices).
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication
Now, if I understood it right, when a user sends a request to the API gateway, which in turn calls the first service, that service will return an acknowledgement (with some GUID) which will be passed to the client application, but the services will keep on executing the request.
Now the question pops up: how will they notify the client application when the request has been processed completely? One way is for the client to check the status using the GUID passed to it.
But can it be done with some kind of push notification? How can we integrate server-to-server push notifications?
I have a slightly different understanding of this: it says communication between services should be asynchronous, while communication from the client to the API gateway and from the API gateway to the services should be REST API calls. So we don't need to do anything special; these are simple API calls and the pipeline will handle request-response tracking, while asynchronous calls between services will increase the throughput of the services.
Now, if I understood it right, when a user sends a request to the API gateway, which in turn calls the first service, that service will return an acknowledgement (with some GUID) which will be passed to the client application, but the services will keep on executing the request.
No, the microservices should not continue to execute the request; it is already finished. They will, when required, update their internal cache (their local representation, to be more precise) of the needed remote data (data from the microservice that executed the request) when that remote data has changed. The best way to do that update is with integration events (i.e. when a microservice executes a request that mutates the data, it publishes an event to the subscribed microservices).
The microservices should not communicate with each other, not even asynchronously, in order to fulfill a request from the gateway or the clients. They should use background tasks to prepare the data ahead of time, before a request comes.
You're depicting a scenario where the whole interaction between the system and the external actors (to put it bluntly, the users) follows an asynchronous model. This is perfectly reasonable, but only if you really need it. As a matter of fact, if you are choosing to let 'the outside' interact with your system through REST APIs, maybe you don't need it at all.
If the system receives requests through a synchronous application endpoint, such as a REST endpoint, it has to complete each request before sending a response; otherwise the response would be meaningless. Consider an API like
POST users/:username/notifications
A notification request is synchronous by its nature, but the request just states that 'a new notification should be appended to the user's notifications collection'. The API responds 201, which means 'OK, the notification is already associated with the user; it will be pushed on some channel, eventually'. This is a 'transactional' way to describe an asynchronous interaction.
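As a tiny illustration of that "accept now, deliver later" style (not tied to any particular web framework; the types, the in-memory store and the queue are all made up for the example):

// Minimal illustration of "accept, record, acknowledge now, deliver later".
// Everything here (types, in-memory store, delivery queue) is made up for
// the example; a real service would use its web framework and a broker.
#include <iostream>
#include <map>
#include <queue>
#include <string>
#include <vector>

struct Notification { std::string user; std::string text; };

std::map<std::string, std::vector<Notification>> notificationsByUser; // "DB"
std::queue<Notification> deliveryQueue; // drained later by a push worker

// Handler for POST /users/:username/notifications.
int postNotification(const std::string& username, const std::string& text) {
    Notification n{username, text};
    notificationsByUser[username].push_back(n); // the transactional part
    deliveryQueue.push(n);                      // actual push happens later
    return 201; // "created": the notification is associated with the user
}

int main() {
    std::cout << postNotification("alice", "your order shipped") << "\n"; // 201
}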
Another scenario arises when the user wants to subscribe to the notification channel. I expect that this would be implemented with a bi-directional, asynchronous, pub-sub communication protocol, such as WebSockets.
In both cases, however, no matter how the microservices communicate with each other, if the request is synchronous, the first service in 'the chain' should wait until it is ready to respond. This is the reason why the API gateway forwards the request over HTTP.
On the other hand, asynchronous communication can be used to enforce consistency between services, rather than to carry the actual communication. Let's say the Orders service sends data to a broker: each time some attribute of orders[orderId] changes, it publishes the change on the /orders/:orderId topic. At the same time, it exposes an internal HTTP endpoint. Each service caches data from the services it depends on. The User service makes a GET /orders/:orderId and, while it sends the response to the requester, puts the data in a local cache and subscribes to the orders/:orderId topic. Each time a 'mutation' is published on this topic, the User service catches it and applies the mutation to the corresponding cached object. The communication is synchronous, stays synchronous, and is relatively simple to manage; at the same time your system can hold replicated data and still be [eventually] consistent.
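A stripped-down sketch of that cache-plus-events idea, with plain in-memory structures standing in for the HTTP call and the broker subscription (so every name here is illustrative):

// Sketch of the pattern described above: the User service fetches an order
// once, caches it locally, and afterwards only applies mutation events
// published by the Orders service. HTTP and the broker are faked with
// plain functions/structures.
#include <iostream>
#include <map>
#include <string>

struct Order { std::string status; double total = 0.0; };

// Stands in for GET /orders/:orderId against the Orders service.
Order fetchOrder(const std::string& orderId) {
    return Order{"CREATED", 99.0};
}

class UserService {
public:
    // Called while serving a user request: fetch once, then rely on events.
    const Order& getOrder(const std::string& orderId) {
        auto it = cache_.find(orderId);
        if (it == cache_.end()) {
            it = cache_.emplace(orderId, fetchOrder(orderId)).first;
            // ... and subscribe to /orders/:orderId on the broker here.
        }
        return it->second;
    }

    // Invoked by the broker subscription when a mutation is published.
    void onOrderMutation(const std::string& orderId, const std::string& newStatus) {
        auto it = cache_.find(orderId);
        if (it != cache_.end()) it->second.status = newStatus;
    }

private:
    std::map<std::string, Order> cache_; // local, eventually consistent copy
};

int main() {
    UserService users;
    std::cout << users.getOrder("42").status << "\n"; // CREATED (fetched once)
    users.onOrderMutation("42", "SHIPPED");           // event from Orders
    std::cout << users.getOrder("42").status << "\n"; // SHIPPED (from cache)
}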

Scout Eclipse Server - Client communication

I know how to implement communication in the client -> service direction, but I need communication in the service -> client direction. I would need some observer on the client, or some way to call the client from the server.
Marko
I think you can use Client Notifications for that. Check the client notification how-to page for a raw description of what you need to add to your application.
I never used them myself. Be aware that they are not a solution for all use cases.

Which Common API should be used to send data from the server when the server is in source mode?

I am writing the MirrorLink server Data Services (CDB/SBP) module.
However, all the Common APIs of Data Services (8 managers and 5 listeners) seem to be useful only when the server is in sink mode.
When I get a GET, SET or SUBSCRIBE command from the client, which Common API should be used to notify the server application about the request, so that the application can fill the object and a response can be sent back to the client?
Otherwise, what data will the server send as a response if it can't notify the server application?
The phone (server) can be a data source for the head unit (client), but support is optional for both the phone and the head unit. There aren't many phones or head units that support it at this time.
The Common API does not (currently) define how to provide a data object as a source. This will be defined in a future revision as an additional service module.

Can one say an architecture using WebSocket technology is based on the client-server model?

Can one say an architecture using WebSocket technology is based on the client-server model?
By definition, the client-server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and the service requesters, called clients.
However, using WebSocket technology, both endpoints can act as providers of a resource or service and also as service requesters.
Say, for example, a situation where the two endpoints are a user device with a GPS sensor and a computer, both connected over WebSocket. The computer sends requests to obtain the current position of the user device (here the user device is acting as the resource provider and the computer as the requester). Later on, the user device uses the WebSocket connection to request all of its positions over the last 5 days from the computer (now the user device is acting as the requester and the computer as the resource provider).
If both devices can act as both resource provider and requester, are they still complying with the client-server model definition or not?
No, it's not breaking anything. Endpoints are not devices; they are connections between devices.
I.e. if we were asking each other questions and answering them, there would be two connections between two 'devices', giving four endpoints: you to me and me to you. No conflict.
TCP is full-duplex capable, and WebSockets in particular are full duplex. As @Tony Hopkinson pointed out, there is no conflict at all. This means you can write and read at the same time.
WebSockets are a push technology, more suited to events, while the usual request-response models are a pull technology.
You can have either client-server or peer-to-peer architectures with a push approach, but pull is the usual choice for client-server architectures.
Peer-to-peer architecture: a peer-to-peer network is designed around the notion of equal peer nodes simultaneously functioning as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client-server model, where communication is usually to and from a central server. A typical example of a file transfer that uses the client-server model is the File Transfer Protocol (FTP) service, in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests.
You can also provide a mix of peer-to-peer and client-server. For example, you can do requests via WebSocket and, at the same time, the server can send updates on its own initiative. I don't understand what you mean by "breaking the model": WebSocket is just a communication channel. In your app both models can coexist and use the same communication channel.
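One common way to let both models share a single duplex channel is to tag each request with an id: replies carry the id back, and anything without a matching id is treated as a server-initiated push. A small sketch of just that dispatch logic, with the WebSocket transport stubbed out and all names illustrative:

// Sketch of mixing request-response and server push on one duplex channel.
// The transport is stubbed out; only the dispatch logic is shown.
#include <functional>
#include <iostream>
#include <map>
#include <string>

struct Message {
    int requestId = 0;      // 0 means "not a reply to anything"
    std::string payload;
};

class DuplexClient {
public:
    // Send a request and remember which callback its reply should resolve.
    void request(const std::string& payload, std::function<void(const Message&)> onReply) {
        int id = ++nextId_;
        pending_[id] = std::move(onReply);
        // transport.send(Message{id, payload});  // over the websocket
    }

    // Called for every incoming frame on the same connection.
    void onMessage(const Message& msg) {
        auto it = pending_.find(msg.requestId);
        if (it != pending_.end()) {
            it->second(msg);       // reply to one of our requests
            pending_.erase(it);
        } else {
            onPush_(msg);          // unsolicited, server-initiated update
        }
    }

    void setPushHandler(std::function<void(const Message&)> h) { onPush_ = std::move(h); }

private:
    int nextId_ = 0;
    std::map<int, std::function<void(const Message&)>> pending_;
    std::function<void(const Message&)> onPush_ = [](const Message&) {};
};

int main() {
    DuplexClient client;
    client.setPushHandler([](const Message& m) { std::cout << "push: " << m.payload << "\n"; });
    client.request("get position", [](const Message& m) { std::cout << "reply: " << m.payload << "\n"; });

    client.onMessage(Message{1, "51.5,-0.1"});       // correlated reply
    client.onMessage(Message{0, "positions ready"}); // server push
}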
