How do I add a pipeline to a REQ-REP in ZeroMQ?

I am experimenting with ZeroMQ, where I want to create a server that does:
REQ-PIPELINE-REPLY
I want to sequentially receive data query requests, push each one through an inproc pipeline to parallelise the data query, and have a sink merge the data back together. After merging, the sink sends the merged data back as the reply to the original request.
Is this possible? How would it look? I am not sure if the push/pull sockets will preserve the client's address for the REP socket to send back to.

Assuming that each client has only a single request out at any one time.
Is this possible?
Yes, but with different socket types.
How would it look?
(in C)
What you may like to do is switch the external server socket from ZMQ_REP to ZMQ_ROUTER. The ROUTER/DEALER sockets carry identities, which allow you to have multiple requests in your pipeline and still respond correctly to each.
The Asynchronous Client/Server Pattern:
http://zguide.zeromq.org/php:chapter3#The-Asynchronous-Client-Server-Pattern
The only hitch is that you will need to manage the multiple parts (frames) of the ZMQ message. The first frame is the identity, the second is an empty delimiter, and the third is the data. As long as you reply with the frames in the same order as the request, the identity will guide your response's data to the correct client. I wrapped my requests in a struct:
struct msg {
    zmq_msg_t *identity;   /* frame 1: routing identity */
    zmq_msg_t *nullMsg;    /* frame 2: empty delimiter */
    zmq_msg_t *data;       /* frame 3: payload */
};
Make sure to use zmq_msg_more when receiving to check whether more frames follow, and to set the ZMQ_SNDMORE flag on every frame but the last when sending.
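The envelope handling can be sketched without any ZeroMQ dependency; here the frames are modeled as plain byte slices (this is not the libzmq API, just the framing logic):

```go
package main

import "fmt"

// A ROUTER-side message is a sequence of frames:
// [identity][empty delimiter][data]. This models that envelope
// with plain byte slices rather than real zmq_msg_t objects.
type routedMsg struct {
	identity []byte // routing identity frame
	data     []byte // payload frame (the delimiter is implicit)
}

// parseFrames splits raw frames into identity and data,
// skipping the empty delimiter frame in between.
func parseFrames(frames [][]byte) routedMsg {
	return routedMsg{identity: frames[0], data: frames[2]}
}

// replyFrames rebuilds the three-frame envelope so the reply
// routes back to the original client.
func replyFrames(m routedMsg, result []byte) [][]byte {
	return [][]byte{m.identity, {}, result}
}

func main() {
	req := [][]byte{[]byte("client-A"), {}, []byte("query")}
	m := parseFrames(req)
	rep := replyFrames(m, []byte("merged result"))
	fmt.Printf("%s -> %s\n", m.identity, rep[2])
}
```

The same strip-then-reattach logic applies whatever the middle pipeline looks like, as long as the identity and delimiter frames travel alongside the request data.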
I am not sure if the push/pull will preserve client's address for the
REP socket to send back to.
You are correct. A push/pull pattern would not preserve a return address across multiple clients.

Related

How to create two UDP sockets where one sends requests and another receives the answers?

I'm looking for a proper way to have one goroutine send out request packets to specific servers while a second goroutine receives the responses and handles them, perhaps even creating a new goroutine to handle each response.
The architecture of the game is that there are multiple masterservers, which can be asked for ip lists of registered servers.
After getting the ips and ports from the masterservers, each of the ips gets a request for its data, like server name, map, players, etc.
Also, are there better ways to handle this?
Currently I am creating a goroutine per request that also waits for a response afterwards.
The wait for a response times out after 35 ms, after which the client sends 1.2 times the previous number of request packets as a small burst. The timeout is also doubled on every retry.
I'd like to know if there are better strategies that have proven to be more robust and have a lower latency, that are not too complex.
Edit:
I only create the client-side sockets. If there is no better approach, I would have a client that sends UDP request packets carrying a different socket's address as the sender value, so that the answers are received on a separate socket that acts somewhat like a server, where all the response packets are collected. This separates the sending socket from the receiving socket.
This question is tagged as client-server because one of the sockets is supposed to act like a server, even though all it does is receive expected answers in response to request packets sent by the client socket.

Possible to access TCP packet details with a go HTTP client?

I have a need to validate TOS/DSCP marks on response data from a set of HTTP servers. Given a list of target URLs to test, is there a way in Go to generate the HTTP request and then examine the response's TCP packet details in order to obtain the TOS value?
My assumption at this point is that it may require creating a socket, and then dynamically generating a TCP packet that contains the HTTP request payload. I've been searching around to see if there were any libraries that would aid in this task, but haven't found anything specific yet.
Note: a simple TCP connection will not provide enough data - the target servers in question will alter TOS/DSCP marks dynamically based on the HTTP server name (so essentially, a single physical server will respond with different TOS marks depending on the vHost requested), so it is important to be able to verify the TOS on actual HTTP response packets, and not something simple like a ping. The TOS values in the TCP 3-way handshake cannot be trusted either - it must be a packet containing the HTTP data.
I did end up solving this problem using gopacket/pcap and net/http.
In a nutshell, what I ended up doing is writing a function that creates a channel, and then calls a goroutine that does the actual packet capture and parsing. The goroutine passes the captured TOS value back to the channel, and then the original function does the http request, and then reads the channel to get the TOS result. Still a bit of a work-in-progress, but so far, this solution seems to be working fairly well.

ZeroMQ pipeline with return to initial client

The gist of what I'm trying to accomplish is to have a fan-out type of processing that will return a result to the initial client.
Right now, it is set up as:
[REQ]-->[ROUTER|PUB]-->[SUB|PUSH]-->[PULL|???]
I have it set up as PUB-SUB as the idea is that each SUB node will process a different part of a given manifest. For certain manifests, all SUB nodes are hit. For other manifests, maybe only a subset of the SUB nodes are hit. Using the SUB allows me to implement it without creating a discrete decision point on which nodes to route to.
I've got it to the point where I'm more or less able to bring the results together, but I have no idea how I'm supposed to return a result to the initial caller on the REQ without the caller binding a new socket at the client and then connecting to the socket. Mistakenly, I figured that if I could get the address of the caller at the ROUTER, I could pass that info along and send a message back to the initial REQ.
It seems that it should be possible and what I'm missing is perhaps some device coupled to the ROUTER?
So is it possible to accomplish this and is there a better pattern for this without binding another socket at the caller?
The initial caller REQ expects its reply from ROUTER and cannot accept messages from anywhere else. Therefore, a simple approach would be a broker with three endpoints:
ROUTER for communication with clients
PUB for sending messages to all workers
PULL for collecting the results
Routing within the broker would be:
ROUTER -> PUB
PULL -> aggregate_by_client_id() -> ROUTER
The part that is, from my point of view, tricky is hidden in aggregate_by_client_id(), which is necessary since you can send only one answer back to a REQ socket. Do you know how many results from workers to expect?
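A sketch of what aggregate_by_client_id() might track, assuming the broker knows up front how many worker results each client's manifest will produce (all names here are invented):

```go
package main

import "fmt"

// aggregator collects worker results per client id and reports when a
// client's reply is complete. expected maps client id -> number of
// worker results the broker is waiting for (the open question above).
type aggregator struct {
	expected map[string]int
	partial  map[string][]string
}

func newAggregator() *aggregator {
	return &aggregator{
		expected: map[string]int{},
		partial:  map[string][]string{},
	}
}

// add stores one worker result; it returns the merged reply and true
// once all expected results for that client have arrived.
func (a *aggregator) add(clientID, result string) ([]string, bool) {
	a.partial[clientID] = append(a.partial[clientID], result)
	if len(a.partial[clientID]) == a.expected[clientID] {
		merged := a.partial[clientID]
		delete(a.partial, clientID)
		return merged, true
	}
	return nil, false
}

func main() {
	agg := newAggregator()
	agg.expected["client-1"] = 2 // this manifest hits two SUB nodes
	agg.add("client-1", "part-a")
	if merged, done := agg.add("client-1", "part-b"); done {
		fmt.Println("reply to client-1:", merged)
	}
}
```

When `done` is true the broker would prepend the stored ROUTER identity frames and send the merged reply back to that client.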

Right ZeroMQ topology

I need to write an Order Manager that routes client (stock, FX, whatever) orders to the proper exchange. The clients want to send orders, but know nothing about FIX or other proprietary protocols, only an internal (normalized) format for sending orders. I have applications (servers) that each connect through FIX/Binary/etc connections to each FIX/etc provider.
I would like a broker program in between the clients and the servers that takes the normalized order and turns it into the proper format for a given FIX/etc provider, and takes messages from the servers and turns them back into the normalized format for the clients. It is ok for the clients to specify a route, but it is up to the broker program to communicate messages about that order back and forth between clients and servers. So somehow the output [fills, partial fills, errors, etc] from the servers has to be routed back to the right client.
I have studied the ZMQ topologies, and REQ->ROUTER->DEALER doesn't work [the code works - I mean it is the wrong topology] since the servers are not identical.
//This topology doesn't work because the servers are not identical
#include "zhelpers.hpp"

int main (int argc, char *argv[])
{
    // Prepare our context and sockets
    zmq::context_t context(1);
    zmq::socket_t frontend (context, ZMQ_ROUTER);
    zmq::socket_t backend (context, ZMQ_DEALER); // ZMQ_ROUTER here? Can't get it to work
    frontend.bind("tcp://*:5559");
    backend.bind("tcp://*:5560");

    // Start built-in device
    zmq::device (ZMQ_QUEUE, frontend, backend);
    return 0;
}
I thought that maybe a ROUTER->ROUTER topology is correct instead, but I can't get the code to work - the clients send orders but never get responses back, so I must be doing something wrong. I thought that using ZMQ_IDENTITY was the correct thing to do, but I can't get that to work either, and it seems as if ZMQ is moving away from ZMQ_IDENTITY?
Can someone give a simple example of three ZMQ programs [not in separate threads, three separate processes] that show the correct way to do this?
Look at the MajorDomo example in the Guide: http://zguide.zeromq.org/page:all#toc71
You'd use a worker pool per exchange.
Responding to:
ROUTER->ROUTER topology instead is correct, but I can't get the code to work
My understanding is that ZMQ sockets come in pairs to enable a certain pattern.
PAIR
REQ/REP
PUB/SUB
PUSH/PULL
Only the PAIR socket type can talk to another PAIR socket; it behaves similarly to a normal socket.
For all other socket types there is a complementary socket type for communication. For example, a REQ socket can only talk to a REP socket; a REQ socket cannot talk to another REQ socket.
My understanding is that in ROUTER/DEALER, ROUTER can talk to DEALER but ROUTER can not talk to ROUTER socket type.
My understanding could be wrong but from the examples this is what I have understood so far.
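Whichever topology is chosen, the broker needs some bookkeeping to route fills back to the originating client. A minimal sketch, assuming each order carries an id and each client is known by an identity string (both invented here):

```go
package main

import "fmt"

// orderBook remembers which client identity submitted each order so
// that fills, partial fills and errors coming back from an exchange
// server can be routed to the right client.
type orderBook map[string]string // order id -> client identity

// route looks up the client identity for a returning message.
func (b orderBook) route(orderID string) (string, bool) {
	id, ok := b[orderID]
	return id, ok
}

func main() {
	book := orderBook{}
	book["ord-42"] = "client-7" // recorded when the order passed through
	if client, ok := book.route("ord-42"); ok {
		fmt.Println("fill for ord-42 goes to", client)
	}
}
```

In a real broker the identity would be the ROUTER frame received with the order, stored before forwarding to the exchange-specific server and prepended again when the server's response comes back.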

HTTP request/response debugging

I have two phones connected to a Wi-Fi access point, both with IPs in the private range.
One of the phones has an HTTP server running on it and the other phone acts as a client. The client sends GET request data to the server as name/value pairs in the URL query string. At the moment the server only sends an HTTP OK on receiving the query string.
The client may not be stationary and may be moving around, so it may not always be in range of the Wi-Fi access point; because of that, I am not getting all the data sent from the client at the server end.
I want to ensure that all data sent is actually received by the server.
What kind of error correction should I implement? Can I check for some relevant HTTP error codes or the like?
If the HTTP server doesn't receive the entire query string in a GET request, then the HTTP request cannot possibly be valid as these parameters are on the first line of the request.
The server will be unable to handle the request and in this case will likely return status code 400 (Bad Request).
If your client receives this (though it seems unlikely that it would fail to transmit the request yet still receive the response), then you'll know to retransmit. In general, the properties of TCP connections - automatic retransmission, checksums and timeouts - should be all you need for successful delivery, or to determine failure.
You need to check for timeouts on the client. That depends on the process/language used.
EDIT: http://wiki.forum.nokia.com/index.php/Using_Http_and_Https_in_Java_ME
Looks like you simply set a timeout and catch IO errors.
Premature optimization.
Connection integrity is already dealt with in the lower parts of the network stack. So if there were any dropouts in the middle of the request (assuming it spanned more than a single packet) the TCP stack would attempt to recover them before passing the data on to the server.
If you need to prove this to yourself, then just add a checksum as the last part of the query.
C.
