I have the following setup:
zmq::proxy( acceptor, clients, nullptr );
My acceptor is a zmq::socket_type::stream socket and my clients is a zmq::socket_type::dealer socket.
I am finding that when the other end sends a large request (~16 kB), the request gets broken up and distributed to my dealer threads in pieces: one dealer gets the head of the message, others get pieces from the middle. I am not setting any special options, so this seems to be default ZeroMQ behaviour.
I am using ZeroMQ 4.2.2.
Is there any way to override this behaviour and guarantee delivery of complete messages to my dealer threads?
#namdam deserved [+1] for posting version details
Is there any way to override this... ?
Yes, kindly follow the API-documented rules:
A socket of type ZMQ_STREAM is used to send and receive TCP data from a non-ØMQ peer, when using the tcp:// transport. A ZMQ_STREAM socket can act as client and/or server, sending and/or receiving TCP data asynchronously.
Compatible peer sockets: none.
So either compose the proxy from compatible socket-archetypes ( w/o trying to hardwire ZMQ_STREAM onto any other ZeroMQ native socket-archetype ), i.e. avoid using ZMQ_STREAM at all, or create a reading gateway that decodes and mediates the ZMQ_STREAM-compatible behaviour on one side and interfaces to the other ZeroMQ native socket-archetypes on the other side of the gateway-logic.
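A minimal sketch of such a gateway, assuming a recent cppzmq and assuming the TCP peer prefixes every application message with a 4-byte big-endian length (raw TCP, and therefore ZMQ_STREAM, carries no message boundaries of its own); the port number and the inproc endpoint are arbitrary choices for illustration:

#include <zmq.hpp>
#include <arpa/inet.h>   // ntohl()
#include <cstdint>
#include <cstring>
#include <map>
#include <string>

int main()
{
    zmq::context_t ctx( 1 );

    zmq::socket_t tcp_side( ctx, zmq::socket_type::stream );   // talks raw TCP
    tcp_side.bind( "tcp://*:9000" );

    zmq::socket_t workers( ctx, zmq::socket_type::push );      // dealer threads PULL from here
    workers.bind( "inproc://workers" );

    std::map<std::string, std::string> buffers;                 // per-connection reassembly buffers

    while ( true )
    {
        zmq::message_t identity, chunk;
        (void) tcp_side.recv( identity, zmq::recv_flags::none ); // frame 1: connection id
        (void) tcp_side.recv( chunk,    zmq::recv_flags::none ); // frame 2: whatever TCP delivered

        std::string id( static_cast<char*>( identity.data() ), identity.size() );
        if ( chunk.size() == 0 ) { buffers.erase( id ); continue; } // connect / disconnect event

        std::string& buf = buffers[id];
        buf.append( static_cast<char*>( chunk.data() ), chunk.size() );

        while ( buf.size() >= 4 )                                // emit each complete message
        {
            uint32_t len;
            std::memcpy( &len, buf.data(), 4 );
            len = ntohl( len );
            if ( buf.size() < 4 + len ) break;                   // body not complete yet
            workers.send( zmq::buffer( buf.data() + 4, len ), zmq::send_flags::none );
            buf.erase( 0, 4 + len );
        }
    }
}

Only once a message has been reassembled to its full length does it get forwarded, so each worker receives complete application messages rather than arbitrary TCP segments.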
If in doubt, you may enjoy a read into the main conceptual differences, sketched in brief in the [ ZeroMQ hierarchy in less than five seconds ] Section.
Is this bidirectional stream native to HTTP/2? I looked at various HTTP/2 clients, but I couldn't find any example where the client and server establish a single connection and continuously push messages from both sides.
(For HTTP/2, maybe at a lower level the communication between client and server uses just one TCP connection with all the requests/responses multiplexed over it, but at the application level I can't find any example where you establish a single connection object and that connection object can be reused to push messages in either direction.)
So how did gRPC achieve "Bidirectional streaming RPCs"? Specifically, in this document
https://grpc.io/docs/what-is-grpc/core-concepts/
It indicates that the server side can define a bidirectional streaming RPC, which allows both the client and the server to continuously push messages and achieve WebSocket-like functionality.
Yes, bidirectional streaming is native to HTTP/2. You can read RFC-7540 for the details of how the protocol works, but basically it allows you to create several streams on a single TCP connection, and each stream can send data in either direction independently of each other.
I'm not familiar with all of the HTTP/2 libraries out there, but I know that nghttp2 will allow this in C++, and I think Java and Go have HTTP/2 implementations in their standard libraries.
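As an illustration of how that surfaces in gRPC's C++ API: assuming a hypothetical service defined roughly as service Chat { rpc Talk(stream Msg) returns (stream Msg); } (the service name, message type, field, and generated stub below are made up for the example), the generated stub hands back a single ClientReaderWriter, and both sides can keep writing and reading on that one stream:

#include <grpcpp/grpcpp.h>
#include <memory>
#include <thread>
// #include "chat.grpc.pb.h"   // hypothetical generated code for the Chat service

void bidi_example( std::shared_ptr<grpc::Channel> channel )
{
    auto stub = chat::Chat::NewStub( channel );               // hypothetical generated stub

    grpc::ClientContext ctx;
    std::shared_ptr<grpc::ClientReaderWriter<chat::Msg, chat::Msg>> stream( stub->Talk( &ctx ) );

    // One thread keeps reading server pushes for as long as the stream is open ...
    std::thread reader( [stream]() {
        chat::Msg incoming;
        while ( stream->Read( &incoming ) ) { /* handle a server push */ }
    } );

    // ... while the caller keeps writing whenever it likes, on the same stream.
    chat::Msg outgoing;
    outgoing.set_text( "hello" );                             // hypothetical field
    stream->Write( outgoing );

    stream->WritesDone();                                     // no more client-side messages
    reader.join();
    grpc::Status status = stream->Finish();                   // collect the final status
}

Under the hood this all rides on one HTTP/2 stream inside one TCP connection, which is what gives it the WebSocket-like feel.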
I'm building a distributed system and I would like asynchronous send and recv from both sides with blocking after high water mark.
PUSH/PULL sockets work great, but I wasn't able to bind a PUSH socket. That means I can't have a client-PUSH to server-PULL and a server-PUSH to client-PULL if the client is behind a firewall, since the server can't connect to the client.
In the book, the following is written, but I can't find an example of it.
"REQ to DEALER: you could in theory do this, but it would break if you added a second REQ because DEALER has no way of sending a reply to the original peer. Thus the REQ socket would get confused, and/or return messages meant for another client." http://zguide.zeromq.org/php:chapter3
I only need a one-to-one connection, so this would in theory work for me.
My question is, what is the best practice to obtain asynchronous send and recv with ZeroMQ without dropping packets?
Most ZeroMQ sockets can both bind (listen on a specific port, acting as a server) and connect (acting as a client). Which side binds and which connects is usually unrelated to the direction of data flow. See the guide for more info.
Try binding your server's PUSH socket and connecting from your client's PULL socket.
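A minimal sketch of that arrangement, assuming a recent cppzmq (the zmq::sockopt and zmq::buffer helpers) and an arbitrary port picked for the example; both endpoints are shown in one file only for brevity and would normally live in separate processes:

#include <zmq.hpp>
#include <string>

int main()
{
    zmq::context_t ctx( 1 );

    // Server side: the PUSH socket binds, so firewalled clients can reach it.
    zmq::socket_t push( ctx, zmq::socket_type::push );
    push.set( zmq::sockopt::sndhwm, 1000 );                   // block sends once the HWM is hit
    push.bind( "tcp://*:5557" );

    // Client side: the PULL socket connects out through the firewall.
    zmq::socket_t pull( ctx, zmq::socket_type::pull );
    pull.connect( "tcp://localhost:5557" );

    std::string payload = "work item";
    push.send( zmq::buffer( payload ), zmq::send_flags::none );

    zmq::message_t msg;
    (void) pull.recv( msg, zmq::recv_flags::none );
    return 0;
}

The reverse direction works the same way: bind a PULL on the server and connect the client's PUSH to it, so every TCP connection is still initiated from the client side.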
Right now I'm using socket.io with mandatory websockets as the transport. I'm thinking about moving to raw websockets but I'm not clear on what functionality I will lose moving off of socket.io. Thanks for any guidance.
The socket.io library adds the following features beyond standard webSockets:
Automatic selection of long polling vs. webSocket if the browser does not support webSockets or if the network path has a proxy/firewall that blocks webSockets.
Automatic client reconnection if the connection goes down (even if the server restarts).
Automatic detection of a dead connection (by using regular pings to detect a non-functioning connection).
Message passing scheme with automatic conversion to/from JSON.
The server-side concept of rooms where it's easy to communicate with a group of connected users.
The notion of connecting to a namespace on the server rather than just connecting to the server. This can be used for a variety of different capabilities, but I use it to tell the server what types of information I want to subscribe to. It's like connecting to a particular channel.
Server-side data structures that automatically keep track of all connected clients so you can enumerate them at any time.
Middleware architecture built-in to the socket.io library that can be used to implement things like authentication with access to cookies from the original connection.
Automatic storage of the cookies and other headers present on the connection when it was first connected (very useful for identifying what user is connected).
Server-side broadcast capabilities to send a common message to either all connected clients, all clients in a room, or all clients in a namespace.
Tagging of every message with a message name, and routing of message names into an eventEmitter so you receive incoming messages by listening on the eventEmitter for the desired message name.
The ability for either client or server to send a message and then wait for a response to that specific message (a reply feature or request/response model).
I have been working with different patterns in ZeroMQ in my project, and right now I am using req/rep (later I will shift to dealer/router) and pub/sub. The client sends messages to the server, and the server publishes this information to other clients who have subscribed.
To use multiple sockets I followed the suggestions in this thread:
Combining pub/sub with req/rep in zeromq, and used zmq_poll. My server polls on the req socket and the pub socket.
While writing the code and reading the above post, I guessed that my pub socket would never get polled in, and that's what I am observing now when I run the program. Only my request is polled in, and publishing is not happening at all.
If I don't use polling it works fine, i.e. as soon as the server gets the message I publish it.
So I am unclear on how polling will be useful in this pattern and how I can use it.
You probably don't need to poll the pub socket. You certainly don't need to poll for input on it, because that can never be triggered (pub sockets are send-only).
The polling pattern might be useful where you want to poll for "ready to send" on both the req and the pub socket, allowing you to multiplex those channels. This will be particularly useful if/when you move to using a dealer/router.
The reason is that replacing the req with a dealer, for example, allows you to send multiple messages before receiving responses. Polling for inbound and outbound readiness will let you take maximum advantage of that.
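For the plain req/rep plus pub/sub case, a server-side sketch (assuming a recent cppzmq; the ports are arbitrary) only needs to poll the REP socket for input, and can publish on the PUB socket the moment a request arrives:

#include <zmq.hpp>
#include <string>

int main()
{
    zmq::context_t ctx( 1 );

    zmq::socket_t rep( ctx, zmq::socket_type::rep );
    rep.bind( "tcp://*:5555" );

    zmq::socket_t pub( ctx, zmq::socket_type::pub );
    pub.bind( "tcp://*:5556" );

    zmq::pollitem_t items[] = {
        { static_cast<void*>( rep ), 0, ZMQ_POLLIN, 0 }       // only the REP side needs POLLIN
    };

    while ( true )
    {
        zmq::poll( items, 1, -1 );                            // block until a request arrives

        if ( items[0].revents & ZMQ_POLLIN )
        {
            zmq::message_t request;
            (void) rep.recv( request, zmq::recv_flags::none );

            std::string ack = "ACK";
            rep.send( zmq::buffer( ack ), zmq::send_flags::none );   // answer the requester ...
            pub.send( request, zmq::send_flags::none );              // ... then fan it out to subscribers
        }
    }
}

The PUB socket never appears in the poll set for input; it is simply written to whenever the REP side has something worth broadcasting.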
How can a client both subscribe and listen to replies with zeromq?
That is, on the client side I'd like to run a loop which only receives messages and selectively sends requests, and on the server side I'd like to publish most of the time, but to sometimes receive requests as well.
It looks like I'll have to have two different sockets - one for each mode of communication. Is it possible to avoid that and on the server side receive "request notifications" from the socket on a zeromq callback thread while pushing messages to the socket in my own thread?
I am awfully new to ZeroMQ, so I'm not sure if what you want is considered best-practice or not. However, a solution using multiple sockets is pretty simple using zmq_poll.
The basic idea would be to have both client and server:
open a socket for pub/sub
open a socket for req/rep
multiplex sends and receives between the two sockets using zmq_poll in an infinite loop
process req/rep and pub/sub events within the loop as they occur
Using zmq_poll in this manner with multiple sockets is nice because it avoids threads altogether. The 0MQ guide has a good example here. Note that in that example, they use a timeout of -1 in zmq_poll, which causes it to block until at least one event occurs on any of the multiplexed sockets, but it's pretty common to use a timeout of x milliseconds or something if your loop needs to do some other work as well.
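A client-side sketch of that loop (assuming a recent cppzmq, arbitrary endpoints, and a DEALER rather than a strict REQ for the request channel, paired with a ROUTER on the server, so requests can be sent at any point):

#include <zmq.hpp>
#include <string>

int main()
{
    zmq::context_t ctx( 1 );

    // Channel for server broadcasts.
    zmq::socket_t sub( ctx, zmq::socket_type::sub );
    sub.connect( "tcp://localhost:5556" );
    sub.set( zmq::sockopt::subscribe, "" );                   // subscribe to everything

    // Channel for selective requests / replies.
    zmq::socket_t dealer( ctx, zmq::socket_type::dealer );
    dealer.connect( "tcp://localhost:5555" );

    zmq::pollitem_t items[] = {
        { static_cast<void*>( sub ),    0, ZMQ_POLLIN, 0 },
        { static_cast<void*>( dealer ), 0, ZMQ_POLLIN, 0 },
    };

    while ( true )
    {
        zmq::poll( items, 2, 100 );                           // 100 ms timeout leaves room for other work

        if ( items[0].revents & ZMQ_POLLIN )
        {
            zmq::message_t update;
            (void) sub.recv( update, zmq::recv_flags::none );   // handle a published update
        }
        if ( items[1].revents & ZMQ_POLLIN )
        {
            zmq::message_t reply;
            (void) dealer.recv( reply, zmq::recv_flags::none ); // handle a reply to an earlier request
        }

        // A request can be sent whenever the application decides to, e.g.:
        // std::string req = "status?";
        // dealer.send( zmq::buffer( req ), zmq::send_flags::none );
    }
}
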
You can use 2 threads to handle the different sockets. The challenge is that if you need to share data between threads, you need to synchronize it in a safe way.
The alternative is to use the ZeroMQ Poller to select the sockets that have new data on them. The process would then use a single loop in the way bjlaub explained.
This could be accomplished using a variation/subset of the Majordomo Protocol. Here's the idea:
Your server will be a router socket, and your clients will be dealer sockets. Upon connecting to the server, the client needs to send some kind of subscription or "hello" message (of your design). The server receives that packet, but (being a router socket) also receives the ID of that client. When the server needs to send something to that client (through your design), it sends it to that ID. The client can send and receive at will, since it is a dealer socket.
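A compressed sketch of that exchange, assuming a recent cppzmq and with both peers in one file purely for illustration (the port and the HELLO payload are arbitrary):

#include <zmq.hpp>
#include <string>

int main()
{
    zmq::context_t ctx( 1 );

    zmq::socket_t router( ctx, zmq::socket_type::router );   // server
    router.bind( "tcp://*:5555" );

    zmq::socket_t dealer( ctx, zmq::socket_type::dealer );   // client
    dealer.connect( "tcp://localhost:5555" );

    // Client announces itself with a "hello" / subscription message of your design.
    std::string hello = "HELLO";
    dealer.send( zmq::buffer( hello ), zmq::send_flags::none );

    // Server receives [identity][payload]; it keeps the identity for later pushes.
    zmq::message_t identity, payload;
    (void) router.recv( identity, zmq::recv_flags::none );
    (void) router.recv( payload,  zmq::recv_flags::none );

    // Later, the server pushes to that specific client by prefixing its identity frame.
    std::string notice = "server push";
    router.send( identity, zmq::send_flags::sndmore );
    router.send( zmq::buffer( notice ), zmq::send_flags::none );

    // The dealer receives just the payload, and may keep sending / receiving at will.
    zmq::message_t pushed;
    (void) dealer.recv( pushed, zmq::recv_flags::none );
    return 0;
}

In a real server you would store each identity frame in a map keyed by client, so you can address any connected client later without it having to ask first.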