Can anyone explain the request-reply broker zeromq example?

I'm referring to 'A Request-Reply Broker' in the ZeroMQ documentation: http://zguide.zeromq.org/chapter:all
I'm getting the general gist of the app: it acts like an intermediary and routes messages from the client to the server and back again.
What I'm not getting, though, is how it makes sure the correct response from a server is sent to the correct client which originally made the request. I don't see anything in the code example that ensures this.
Now in the example they only send 1 message (hello) and 1 response (world), so even if messages got mixed up it wouldn't matter, but I'm guessing the test client and server are just kept deliberately simple.
Any thoughts are welcome...

All zeromq sockets implicitly have an identity associated with them. (You can obtain this identity with zmq_getsockopt().)
For bi-directional socket types other than XREQ and XREP, this identity is automatically transferred as part of every message sent over the socket. The REP socket uses this identity to route the response message back to the appropriate socket. This has the effect of automatic routing.
Under the hood, identities are transferred via multipart messages. The first frame of the multipart message contains the socket identity, followed by an empty delimiter frame, followed by the frames supplied by the user. The REQ and REP sockets deal with these prefix frames automatically. However, if you are using XREQ or XREP sockets, you need to populate these identity frames yourself.
If you search for "identity" in the ZMQ Guide, you should find all the details you will ever want to know about how identities and socket routing work.
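To make that envelope visible, here is a minimal pyzmq sketch (not taken from the guide; the port and the explicit identity are arbitrary choices for illustration) that replaces the broker's frontend with a ROUTER socket you read manually:
# Minimal pyzmq sketch of the identity envelope; not part of the guide example.
import zmq

ctx = zmq.Context()

router = ctx.socket(zmq.ROUTER)                 # ROUTER is the newer name for XREP
router.bind("tcp://127.0.0.1:5559")             # placeholder address

req = ctx.socket(zmq.REQ)
req.setsockopt(zmq.IDENTITY, b"client-1")       # optional; ZMQ generates an identity otherwise
req.connect("tcp://127.0.0.1:5559")

req.send(b"Hello")

frames = router.recv_multipart()
print(frames)                                   # [b'client-1', b'', b'Hello']

# Sending the identity and delimiter back in front of the payload is what
# routes the reply to the right client; REQ/REP do this for you invisibly.
router.send_multipart([frames[0], b"", b"World"])
print(req.recv())                               # b'World'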

OK, in chapter 3 they suddenly explain that there is an underlying concept of an 'envelope' which the req/rep pattern invisibly uses.
This explains how it works.

Basic Dealer-Router Socket not working

I'm trying to implement the basic DEALER - ROUTER socket in ZeroMQ.
My Question has multiple parts.
Before that, here are my sample scripts
DEALER SCRIPT
ROUTER SCRIPT
QUESTION -
Firstly, my vanilla DEALER script is unable to read the message from the socket.
Secondly, when I'm implementing a DEALER or ROUTER pattern, is it mandatory to pass the IDENTITY across (as part of the header), i.e. can't the message be sent without any IDENTITY?
In other words, can a DEALER - ROUTER pair (as can be seen below) coexist and pass messages between themselves without sending identity info in the header?
DEALER WITHOUT ANY IDENTITY
ROUTER WITHOUT ANY IDENTITY
Because I'm unable to get it working without the identity either.
NOTE: The ZeroMQ Ruby library (Ruby client) in use here is ffi-rzmq.
Your code shows a lot of misunderstandings about how ZMQ works; I suggest you read the guide and follow the Ruby examples to set up your scripts.
Here are the problems that I see:
In your DEALER script, you explicitly receive the identity. A DEALER never gets its own identity back as part of the message; ZMQ removes it silently because it's not intended to be message data, it's intended to be an "address" used by the ROUTER socket. So you're actually receiving the delimiter into your identity variable and the message into your delimiter variable, and then nothing is left, so your msg variable is empty. If you puts the values of all three variables, you'll see it (see the frame sketch after this list).
You don't need a ZMQ poller in your DEALER script. Pollers are intended for receiving messages from multiple sockets, and you're only using one socket. I don't know whether a poller is even intended to work with a single socket, but at any rate it's needless additional complexity; rip it out. See here for a simple send/receive example from the guide (if you just change the socket type to DEALER, add your particulars - identity, address, port, etc. - and omit the send, it should work for you).
In your second example, where you don't set an identity, the ROUTER socket doesn't address the message to any connected client - you always need to send the client identity as the first frame of the message. Typically, you'll receive a message from your client, which includes its identity, and you'll use that identity to send the message back. You're only able to skip that in the first example because your script already knows the identity, "client".
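To make points 1 and 3 concrete, here is a rough frame-flow sketch. It uses pyzmq rather than ffi-rzmq purely for brevity; the identity, address and port are placeholders, but the framing is identical across bindings:
# Illustrative pyzmq sketch of DEALER <-> ROUTER framing; names are placeholders.
import zmq

ctx = zmq.Context()

router = ctx.socket(zmq.ROUTER)
router.bind("tcp://127.0.0.1:5560")

dealer = ctx.socket(zmq.DEALER)
dealer.setsockopt(zmq.IDENTITY, b"client")      # optional; ZMQ auto-generates one if omitted
dealer.connect("tcp://127.0.0.1:5560")

# DEALER -> ROUTER: the DEALER sends exactly the frames you give it...
dealer.send_multipart([b"", b"hello"])
# ...and the ROUTER receives them with the sender's identity prepended.
print(router.recv_multipart())                  # [b'client', b'', b'hello']

# ROUTER -> DEALER: the ROUTER needs the destination identity as the first
# frame; it is consumed for routing and never reaches the DEALER.
router.send_multipart([b"client", b"", b"world"])
print(dealer.recv_multipart())                  # [b'', b'world'] - no identity frame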

How do I get a Faye client given a client ID?

Faye allows you to monitor various events, such as handshake or subscribe. These callback blocks are only supplied the client_id value rather than the client itself. For example:
server = Faye::RackAdapter.new(mount: '/faye', timeout: 45)
server.bind(:handshake) do |client_id|
  puts "Received handshake from #{client_id}"
end
How can I access the client given the client_id? Or how can I access more information in the handshake, such as cookies provided in the request header (if that info is even available)?
I think my original question is based upon a lack of understanding on how Faye works in two regards. Instead of deleting my question, I'm going to answer it for anyone else who comes across this with a similar question. (If my answer is wrong in any way, please comment or edit!)
First, at no point is access to the connected client available due to the way Faye is implemented with regards to the Bayeux protocol. All communications are carried out via channel broadcasting, meaning all connections listening to a channel will receive the message being sent.
Second, the code I pasted in the question deals with monitoring. What I'm really looking for is an extension.
In order to achieve authentication given my original question, I need to pass whatever authentication value is needed (whether it's a cookie value, auth token, etc.) as part of the message['ext'] value (per the example on the extensions page). Then, on the server side, I need to listen for messages on the /meta/handshake channel, setting message['error'] to some value in the case of an invalid value.

ZMQ REP, knowing who sent the request

I'm currently using ZMQ with Python. The server is using a REP socket.
Is there a way, when I recv a message, to know who sent it? If I receive 2 messages, I just need to know whether they come from the same user or not, so a UID for example would be enough.
It looks like you want to implement async request handling on the server side: you let the server accept requests, process them asynchronously, and send the responses back to clients whenever the response data is available for each request. Now of course: how would you know, after you're done processing a request, which client to send it back to?
With simple REP sockets, ZMQ makes sure you won't run into this kind of problem by enforcing a recv() -> send(), recv() -> send() sequentiality. In other words, after you do a recv() on a REP socket, you must do a send() before recv()ing from it again. The response will be sent back to the client you got the message from, and there's no doubt about the client's address because you're only dealing with one client at a time.
But this doesn't really help when you want to parallelize the request handling, does it? There are many cases when the behavior of REP is too restrictive, and that's exactly what Multipart messages and ROUTER (or XREP) sockets are for. XREP breaks the recv() -> send() lockstep of REP, but that causes a problem as we saw earlier - how do you know which client to send the reply back to, if multiple clients are connected? In order to make this work, XREP in ZMQ adds a message part to the front of a message, like an envelope, that includes the identity of the connection that it recv()'d the request from.
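As a rough illustration (a pyzmq sketch with made-up addresses and payloads, not your code), a ROUTER server can hold on to each request's identity frame and answer out of order, which a plain REP socket would never allow:
# pyzmq sketch: a ROUTER server answering requests out of order by keeping the envelope.
import zmq

ctx = zmq.Context()
server = ctx.socket(zmq.ROUTER)
server.bind("tcp://127.0.0.1:5561")             # placeholder address

clients = [ctx.socket(zmq.REQ) for _ in range(2)]
for i, c in enumerate(clients):
    c.connect("tcp://127.0.0.1:5561")
    c.send(b"request %d" % i)

pending = []
for _ in clients:
    identity, empty, body = server.recv_multipart()   # envelope + payload
    pending.append((identity, body))                  # remember who asked

# Reply in reverse order - the identity frame tells ZMQ where each reply goes.
for identity, body in reversed(pending):
    server.send_multipart([identity, b"", b"reply to " + body])

for c in clients:
    print(c.recv())                                   # each client gets its own reply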
There's a whole chapter in the ZMQ Guide about the advanced Request-Reply patterns. You can also find an example for handling async requests here and a good short explanation of the ZMQ connection handling here.
Reading http://zguide.zeromq.org/page%3aall#Transient-vs-Durable-Sockets, you can only get the identity of the socket you're working with... not the identity of any peer sockets you're connected to.
This being said, just build the sender information into the message. This should be trivial to do (either with a UUID or specific name per client).
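For example, a sketch of that approach with pyzmq (the uid field name and the address are arbitrary) could look like this:
# pyzmq sketch: plain REQ/REP with the sender's uid carried inside the message.
import uuid
import zmq

ctx = zmq.Context()

server = ctx.socket(zmq.REP)
server.bind("tcp://127.0.0.1:5562")             # placeholder address

client = ctx.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5562")

client_uid = uuid.uuid4().hex                   # or a fixed per-client name
client.send_json({"uid": client_uid, "body": "hello"})

msg = server.recv_json()
print("request from", msg["uid"])               # the server now knows which user sent it
server.send_json({"body": "world"})

print(client.recv_json())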

How to handle different (url) websocket connections in netty

Websocket example in netty (examples) has a http request handler which:
performs hand shaking (at first)
(then) handles different types of WebSocket frames, eventually "TextWebSocketFrame"s.
There is only one url for websocket connections in this example.
The problem is, when the TextWebSocketFrame-based actual WebSocket communication starts, there is no direct way to determine the WebSocket URL from the TextWebSocketFrames themselves (correct me if I am wrong).
So, how to handle different (url) websocket connections in netty?
One solution can be registering channels and their "websocket connection urls" during the handshaking process.
The other is having only one websocket connection url and resolving different contexts by adding extra information to websocket messages (TextWebSocketFrames).
I don't find these solutions elegant, so any ideas?
It is my understanding that when you perform a web socket handshake, it is to a specific URL. That is specified in the web socket standard. See RFC 6455. Hence, there is no URL information in the TextWebSocketFrame because the assumption is that the frame will be sent to the URL to which the socket is bound.
To handle different URLs, you will have to either:
Set up a different pipeline and bind to a different IP and/or port for each URL, or
Like you stated, customise the hand shake and store the URL with the channel.
Personally, I've just used JSON in a TextWebSocketFrame. In my JSON, I have a field that states the intended action. This field is used for routing to the appropriate message handler.
I think it comes down to a design decision. WebSockets are intended for long-lived connections where a request message can have 0, 1 or > 1 responses. This contrasts with the REST-style model of 1 request and 1 response.
Hope this helps.
The question "how to handle different (url) websocket connections in netty" does not make sense, I presume that the author meant to ask "how to serve multiple different websocket paths on a single port:host".
The question is valid because the HTTP protocol, (at least version 1.1,) WebSockets, and web browsers all support this scenario:
Client connects to server and the two start exchanging HTTP request/response pairs.
Client sends the HTTP request to upgrade to WebSocket, server honors it, and now a WebSocket is established between client and server.
The original HTTP connection remains open, so client and server can continue exchanging HTTP request/response pairs in parallel to the WebSocket. (In light of this, the term "upgrade" is a misnomer, because the connection is not upgraded at all; instead, a new connection is established for the WebSocket.)
Since the HTTP connection is still available, the client can send another HTTP upgrade request, thus creating another WebSocket. On the client side, it would look like this:
socket1 = new WebSocket( "https://acme.com:8443/alpha" );
socket2 = new WebSocket( "https://acme.com:8443/bravo" );
However, you can't have that, because Netty in all its magnificent glory and terrifying complexity does not exactly support that, and this is true even now, 10 years after the question was asked.
That's because:
Only one ServerBootstrap can bind to a given port on a given host.
(That's how the socket layer works.)
A ServerBootstrap can only have one "Child Handler".
(ServerBootstrap.childHandler() does not report an error if you invoke it twice, but only the last invocation takes effect.)
A ChannelPipeline can only have one WebSocketServerProtocolHandler.
(Only the first WebSocketServerProtocolHandler that you add works, and Netty silently accepts any others you add without issuing an error.)
A WebSocketServerProtocolHandler accepts one and only one webSocketPath.
So, there you have it, a port:host can only have one webSocketPath, and that's a Netty limitation.
It might be possible to overcome this limitation by rewriting WebSocketServerProtocolHandler, but #aintNoBodyGotNoTimeFoDat.
Luckily, Netty does support another feature which makes it possible to achieve a similar thing. The constructor of WebSocketServerProtocolHandler supports a poorly documented and poorly named checkStartsWith parameter which, if set to true, will cause the handler to honor WebSocket negotiation requests not only on the given webSocketPath but also on any WebSocket path that starts with the given webSocketPath and continues with a '?' or a '/' followed by other stuff. So, the code on the client would then look like this:
socket1 = new WebSocket( "https://acme.com:8443/allWebSocketsHere/alpha" );
socket2 = new WebSocket( "https://acme.com:8443/allWebSocketsHere/bravo" );
If you decide to build your Netty server to handle this, the next problem you will face is how to obtain the "/allWebSocketsHere/alpha" and "/allWebSocketsHere/bravo" parts. Luckily, someone else has already figured that out; see "Netty: How to use query string with websocket?" https://stackoverflow.com/a/47897963/773113

JMS / MQ confidentiality between clients

I'm designing a system where one server must send messages to lots of independent clients. The clients don't know about each other and should not be able to consume, peek or in any other way acquire knowledge about each other's messages.
I therefore wonder if JMS / ActiveMQ has the ability to control which clients get which messages?
I want all the clients to connect to the same JMS provider (the 'destination') and consume only messages meant for them. This would be a simple setup from the server's point of view.
An alternative would be to acquire web service endpoints from all the clients and perform WS calls every time the server has a message for a client. I think this alternative sounds 'wrong', as I think WS calls are bloated. There is a great overhead for each WS call, and this server would have to make thousands of calls each day. In my opinion this would be suboptimal for the server...
Short answer: Use a message selector.
Detail answer:
The question doesn't mention how the conversation is initiated, so here are my answers for both scenarios.
a) If client initiates the conversation (i.e. Client sends a message to server and waiting for a reply).
This is a request/reply scenario. Messaging/JMS is a decoupled communication system, but request/reply is a common pattern in JMS. It can be implemented using the correlation pattern:
A unique identifier (correlation id) is sent as part of the request message.
Server receives the message and sets the correlation id in the reply message.
Client uses Message selector to receive the message with the correct correlation id.
b) If server initiates the conversation (i.e. Server sends messages to the clients without client request).
In this case, similar approach can be used.
A fixed client id is assigned to each client.
Server maintains all client ids and sets client id of the recipient as correlation id of the message.
Client uses a Message selector to receive the message whose correlation id equals its client id.
Update about confidentiality.
The following info, extracted from this link, is useful for understanding JMS security:
JMS does not specify a security contract or an API for controlling message confidentiality and integrity. Security is considered to be a JMS-provider-specific feature. It is controlled by a System Administrator rather than implemented programmatically or by the J2EE server runtime.
Two major features of JMS security are Authentication and Authorization. To my knowledge, JMS security for client access focuses on protecting the JMS destinations (not the individual messages). As long as a client has access to a destination, the security role assigned to the client applies to all the messages belonging to that destination.
Based on this,
Solution 1: If the client code is controlled by a trusted party.
Follow my solutions in my original answer.
This will make sure the message is delivered to the right client, but it will not protect anything if the client code is purposely modified to receive all messages.
Solution 2: Assign private destination and user account to each client and configure security such that user account of a client can access only its destination.
Note: I found a link about "Restrictions for message selectors to provide message level authorization", but I think it is a vendor-specific custom feature.
Hope this will be helpful.
