Which one is preferable: zmq_send or zmq_msg_send?

I am discovering zeroMQ, and I understand that zmq_send sends a buffer and zmq_msg_send sends a zmq_msg_t message.
It seems to me that these are two different ways of doing the same thing (both can send multi-part messages, etc.).
What are the advantages of using zmq_msg_t structs?

The advantage is simply that your code works at a bit lower level, closer to the metal, and saves a few CPU cycles: .zmq_send() is a convenience wrapper that first prepares a zmq_msg_t struct from your buffer and then passes it on to the ZMQ-internal message processing, whereas .zmq_msg_send() hands your already-prepared zmq_msg_t over in one step. The zmq_msg_t API also lets you initialize a message from a buffer you already own (zero-copy), which matters for large payloads.

How can I use queueinglib with cPacket instead of cMessage?

I was using a very old queueinglib (maybe from ten years ago), in which Job was inherited from cPacket, not cMessage.
Now I have changed my IDE version from 5 to 6 and had to update queueinglib. When I did, I was very surprised to see that Job is now inherited from cMessage.
In my model, I have both internal and external messages (the latter travel through datarate channels). For internal messages it is okay to use cMessage, but I need to use cPacket for external messages. That's why my own message type was derived from cPacket.
Now I have messages derived from cPacket, but the queueinglib blocks cannot cast them to Job. How can I solve this problem? Here are some ideas that I can think of:
-I could change queueinglib entirely, but I don't want to do this to an external library. I believe it is using cMessage instead of cPacket for a reason.
-Multiple inheritance. I could derive my message type from both cMessage and cPacket, but I saw in the manual that this is not possible.
-I could create a new message when transmitting between one of my blocks and queueinglib. But then message IDs would be useless, and I would be constructing and destroying messages constantly.
So is there a better, recommended approach?
The reason why Jobs are messages in the queueinglib example is that those Job objects never travel along datarate channel links, so they don't need to be packets, and the example is intentionally kept as simple as possible.
BUT: this is an example. It was never meant to be an external library consumed by other projects, so it was not designed to be extensible. You can safely copy and modify it (I recommend marking/documenting your changes in case you want to upgrade or compare the library in the future).
The easiest approach: modify Job.msg and change message Job to packet Job, and you are done. Then use Job as the base of your messages. Since cPacket extends cMessage, the whole queueing library will still work just fine.
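For reference, that is a one-keyword edit in Job.msg — a sketch only, with the existing field list elided:

```
// Job.msg — change the kind from "message" to "packet":
packet Job
{
    // ...keep the existing fields exactly as they are...
}
```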
Without modifying the queueinglib sources: there are various fields in cMessage that can be (mis)used to carry a pointer. For example, you could use setContextPointer() or setControlInfo() to store your packet's pointer (after some casting) whenever it enters the parts where you use queueinglib, and remove the pointer when it leaves. This requires a bit more work (but only in your code) and has the advantage that the network packets do not contain anything from the queueinglib fields (which is a proper design), as data related to the queueing components (e.g. priority or generation) is not meant to travel between network nodes.
Also, using a proper external queueing library (like the one in INET's src/inet/queueing folder) would have been the best solution, but that ship has sailed long ago, I believe.
In short, go with the first recommendation.

Is there any way to use IOCP to notify when a socket is readable / writeable?

I'm looking for some way to get a signal on an I/O completion port when a socket becomes readable/writeable (i.e. the next send/recv will complete immediately). Basically I want an overlapped version of WSASelect.
(Yes, I know that for many applications, this is unnecessary, and you can just keep issuing overlapped send calls. But in other applications you want to delay generating the message to send until the last moment possible, as discussed e.g. here. In these cases it's useful to do (a) wait for socket to be writeable, (b) generate the next message, (c) send the next message.)
So far the best solution I've been able to come up with is to spawn a thread just to call select and then PostQueuedCompletionStatus, which is awful and not particularly scalable... is there any better way?
It turns out that this is possible!
Basically the trick is:
Use the WSAIoctl SIO_BASE_HANDLE to peek through any "layered service providers"
Use DeviceIoControl to submit an AFD_POLL request for the base handle to the AFD driver (this is what select does internally)
There are many, many complications that are probably worth understanding, but at the end of the day the above should just work in practice. This is supposed to be a private API, but libuv uses it, and MS's compatibility policies mean that they will never break libuv, so you're fine. For details, read the thread starting from this message: https://github.com/python-trio/trio/issues/52#issuecomment-424591743
For detecting that a socket is readable, it turns out that there is an undocumented but well-known piece of folklore: you can issue a "zero byte read", i.e., an overlapped WSARecv with a zero-byte receive buffer, and that will not complete until there is some data to be read. This has been recommended for servers that are trying to do simultaneous reads from a large number of mostly-idle sockets, in order to avoid problems with memory usage (apparently IOCP receive buffers get pinned into RAM). An example of this technique can be seen in the libuv source code. They also have an additional refinement, which is that to use this with UDP sockets, they issue a zero-byte receive with MSG_PEEK set. (This is important because without that flag, the zero-byte receive would consume a packet, truncating it to zero bytes.) MSDN claims that you can't combine MSG_PEEK with overlapped I/O, but apparently it works for them...
Of course, that's only half of an answer, because there's still the question of detecting writability.
It's possible that a similar "zero-byte send" trick would work? (Used directly for TCP, and adding the MSG_PARTIAL flag on UDP sockets, to avoid actually sending a zero-byte packet.) Experimentally I've checked that attempting to do a zero-byte send on a non-writable non-blocking TCP socket returns WSAEWOULDBLOCK, so that's a promising sign, but I haven't tried with overlapped I/O. I'll get around to it eventually and update this answer; or alternatively if someone wants to try it first and post their own consolidated answer then I'll probably accept it :-)

NPAPI: data push model?

When working with NPAPI, you have control over two functions: NPP_WriteReady and NPP_Write. This is basically a data push model.
However I need to implement support for a new file format. The library I am using takes any concrete subclass of the following source model (simplified c++ code):
struct compressed_source {
  virtual int read(char *buf, int num_bytes) = 0;
};
This model is trivial to implement when dealing with a FILE* (C), a socket (BSD), and the like, since they follow a pull data model. However, I do not see how to fulfill this pull model from the NPAPI push model.
As far as I understand, I cannot explicitly call NPP_Write from within my concrete implementation of ::read(char *, int).
What is the solution here ?
EDIT:
I did not want to add too much detail to avoid confusing answers. Just for reference, I want to build an OpenJPEG/NPAPI plugin. OpenJPEG is a huge library, and the underlying JPEG 2000 implementation really wants a pull data model to allow fine-grained access to massive images (e.g. a specific sub-region of a 100000 x 100000 image, thanks to low-level indexing information). In other words, I really need a pull-data-model plugin interface.
Preload the file
Preloading the whole file would always work, but it is often not a good idea. From your other questions I gather that the files/downloads in question might be rather large, and avoiding network traffic is desirable, so preloading is not really an option here.
Hack the library
If you're using some open source library, you might be able to implement a push API along or instead of the current pull API directly within the library.
Or you could implement things entirely by yourself. IIRC you're trying to decode some image format, and image formats are usually reasonably easy to implement from scratch.
Implement blocking reads by blocking the thread
You could put the image decoding stuff into a new thread, and whenever there is not enough buffered data already to fulfill a read immediately, do a blocking wait for the data receiving thread (main thread in case of NPAPI) until it indicates the buffer is sufficiently filled again. This is essentially the Producer/Consumer problem.
Of course, you'll first need to choose how to use threads and synchronization primitives (a library such as C++11 std::thread, Boost threads, or lower-level pthreads and/or Windows threads). There are tons of related questions on SO/SE and tons of articles, postings, discussions and tutorials all over the internet.

What do I need the Socket::MSG_* constants for in Ruby?

I want to develop a p2p app which communicates via UDPSockets. I'm just starting to read the docs for that and I couldn't understand that piece of ruby's socket management.
Specifically it's possible to add those "flags", as ruby-doc calls them, to every send call. (http://www.ruby-doc.org/stdlib-1.9.3/libdoc/socket/rdoc/UDPSocket.html#method-i-send)
But when do I use those and how?
You'll probably know if you need to use them as you'll have an example or some documentation that refers to them.
Some of the more common options used with recvfrom are: MSG_OOB to process out-of-band data, MSG_PEEK to peek at the incoming message without de-queueing it, and MSG_WAITALL to wait for the receive buffer to fill up.
These are really quite edge-case so you probably won't ever see one used.
Those flags come from the low-level recv call on which Socket is based.

Shared memory vs. Go channel communication

One of Go's slogans is Do not communicate by sharing memory; instead, share memory by communicating.
I am wondering whether Go allows two different Go-compiled binaries running on the same machine to communicate with one another (i.e. client-server), and how fast that would be in comparison to boost::interprocess in C++? All the examples I've seen so far only illustrate communication between same-program routines.
A simple Go example (with separate client and server code) would be much appreciated!
One of the first things I thought of when I read this was Stackless Python. The channels in Go remind me a lot of Stackless Python, but that's likely because (a) I've used it and (b) I've never touched the language/ideas they actually came from.
I've never attempted to use channels as IPC, but that's probably because the alternative is likely much safer. Here's some pseudocode:
program1:
    chan = channel()
    ipc = IPCManager(chan, None)
    send_to_other_app(ipc.underlying_method)
    chan.send("Ahoy!")

program2:
    chan = channel()
    recv_from_other_app(underlying_method)
    ipc = IPCManager(chan, underlying_method)
    ahoy = chan.recv()
If you use a traditional IPC method, you can have channels at each side that wrap their communication on top of it. This leads to some issues in implementation, which I can't even think about how to tackle, and likely a few unexpected race conditions.
However, I agree; the ability to communicate via processes using the same flexibility of Go channels would be phenomenal (but I fear unstable).
Wrapping a simple socket with channels on each side gets you almost all of the benefits, however.
Rob has said that they are thinking a lot about how to make channels work as (network-)transparent RPC. This doesn't work at the moment, but it is obviously something they want to take the time to get right.
In the meantime you can use the gob package which, while not a perfect and seamless solution, works quite well already.
I've looked at doing a similar thing for wrapping the MPI library. My current thinking is to use something like
func SendHandler(comm Comm) {
    // Look for sends to a process; dest (the peer's rank) is assumed in scope.
    for {
        i := <-comm.To
        comm.Send(i, dest)
    }
}

func ReceiveHandler(comm Comm) {
    // Look for receives from a process.
    // Ping the handler to read out; source (the peer's rank) is assumed in scope.
    for {
        _ = <-comm.From
        i := comm.Recv(source)
        comm.From <- i
    }
}
where comm.Send and comm.Recv wrap a C communications library. I'm not sure how you would set up a channel between two different programs, though; I've no experience in that sort of thing.
