TCP flow control on Indy TCP Server - Windows

The Indy TIdTCPServer component has an OnExecute event where you can process incoming data. My application involves streaming data that is processed before going to a printer, so I'm dependent on the output device being ready. What I want to do is let TCP flow control manage the input stream whenever the output stream is busy.
What I don't know is how best to handle this situation. The Indy documentation is a little light on usage examples; any guidance is appreciated!

You don't need to deal with TCP/IP flow control manually. Simply do not read any new input data in your OnExecute code while the device is not ready; that is all you have to do. The data will sit in the socket's receive buffer until Indy reads it into its own buffer, where it will then sit until you read it into your own code. If the socket's receive buffer fills up, TCP/IP will automatically notify the other party to stop sending data until the buffer frees up some space.
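To illustrate the principle outside of Indy, here is a minimal Python sketch of a server that applies backpressure simply by not reading; the port number and the device_ready() check are placeholders, not part of Indy:

import socket
import time

def device_ready():
    return True  # placeholder: replace with a real readiness check for the printer

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 9100))
srv.listen(1)
conn, _ = srv.accept()

while True:
    if not device_ready():
        time.sleep(0.1)  # do not read; the OS receive buffer fills up,
        continue         # and TCP tells the peer to slow down and stop
    data = conn.recv(4096)  # resume reading once the device is ready
    if not data:
        break
    # ... forward data to the printer ...

conn.close()
srv.close()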

I'm not sure how far along you are with your own code.
If you are still a beginner, you might find the demo samples from http://sourceforge.net/projects/indy10clieservr/ helpful as a starting point.

Related

Detecting socket connection using ZeroMQ STREAM sockets

I am building a new application that receives data from a number of external devices and needs to make it available to a number of different components. ZeroMQ seems purpose-built for the "data bus" aspect of my architecture.
I recently became aware that zmq STREAM sockets can connect to native TCP sockets and send/receive messages. Using zmq throughout has a lot of appeal, but I have one problem that I don't know how to get around.
One of my devices needs to be set up. That is, I connect a socket to it, send it some configuration information, then sit back and wait for it to send me data. The device also has a "reset" capability (useful in some contexts) that requires re-sending the configuration information. Doing this depends on having visibility into the setup/tear-down stages of the socket interface. I need to know when a new connection is established, so I can send the necessary configuration messages.
It seems that zmq is purposely designed to shield me from that knowledge. Is there a way to do what I want? Or should I just use regular sockets for this interface?
Well, it turns out that reading (the right version of) the fine manual can be instructive.
When a connection is made, a zero-length message will be received by the application. Similarly, when the peer disconnects (or the connection is lost), a zero-length message will be received by the application.
I guess all that remains is to disambiguate between connect and disconnect. Still looking for advice from the community, if others have dealt with this situation before.
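For what it's worth, here is a minimal pyzmq sketch of that behavior; tracking identities is one possible way to tell a connect from a disconnect (the endpoint and the CONFIG payload are invented for illustration):

import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.STREAM)
sock.bind("tcp://127.0.0.1:5555")  # placeholder endpoint

connected = set()
while True:
    # STREAM sockets deliver every event as [identity, payload]
    identity, payload = sock.recv_multipart()
    if payload == b"":
        if identity in connected:
            connected.discard(identity)  # zero-length + known identity: disconnect
        else:
            connected.add(identity)      # zero-length + new identity: connect
            sock.send_multipart([identity, b"CONFIG"])  # send setup info
    else:
        pass  # regular data from the device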
Following up on your own answer, I would hesitate to rely on that zero-length connect/disconnect message as your whole strategy; that seems needlessly fragile. It's not clear to me from your question which end is persistent and which end needs configuration information, but I expect that one end knows it is resetting and reconnecting. That end needs configuration information from the peer, so it should ask for it with a message when it needs it, to which the peer responds with the requested information.
If the peer does not yet have the required configuration information before it receives some other message, it could either queue up that work or it could respond back with the need for the config, and then have the rest of the network handle that need appropriately.
You shouldn't need stream/tcp sockets to make that work; it should work with more standard ZMQ socket types. You just need to build the robustness into your application rather than trying to get it for free from TCP/socket actions.
If I've missed your point, and what I'm suggesting won't work for some reason, you will have to give more specific information about your network topology for anyone else to understand what a suitable solution might be.
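As a sketch of that idea (endpoint and message contents are invented for illustration), the resetting end could simply request its configuration over a REQ/REP pair:

import zmq

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.connect("tcp://config-server:5556")  # placeholder endpoint

req.send(b"GET_CONFIG")  # ask explicitly instead of inferring from TCP events
config = req.recv()      # the peer's REP socket answers with the settings
# ... apply config, then start the normal data flow ...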

Should I create a new thread for RTSP client or just use custom IMFMediaSource in Media Foundation

I'm writing an RTSP client and using Media Foundation to stream multiple IP camera video feeds to Windows display. I understand that the built-in MF RTSP doesn't handle IP cameras very well so I'll have to write a custom Media Source:
Writing a Custom Media Source: https://msdn.microsoft.com/en-us/library/windows/desktop/ms700134(v=vs.85).aspx
Also the following post gives some useful tips but not much implementation details:
Capture H264/AAC stream via RTSP using Media Foundation:
https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/8f67d241-7e72-4509-b5f8-e2ba3d1a33ad/capture-h264aac-stream-via-rtsp-using-media-foundation?forum=mediafoundationdevelopment
If I write my RTSP code within my custom Media Source object, will it be able to run sufficiently in its own thread and use the blocking "recv" network call to receive the camera stream data? Or is the COM object not really a separate thread that can handle this type of task? Is there a potential conflict between blocking on the "recv" call and blocking on COM's work queue?
Or should I just create a new thread using "CreateThread" that will handle all the RTSP details and forward the camera stream data to the Media Source object?
Any advice to point me in the right direction would be great!
Implement your Media Source, and inside the implementation:
1. CreateThread when your Media Source is "started": https://msdn.microsoft.com/en-us/library/windows/desktop/ms700134(v=vs.85).aspx#starting
2. Use a blocking recv (or you can implement something more complex, like IOCP threads) inside the thread from step 1.
3. Queue each RTSP frame obtained by recv.
4. Deliver the corresponding frame (top of the queue) when a new sample is requested: https://msdn.microsoft.com/en-us/library/windows/desktop/ms700134(v=vs.85).aspx#source_data
You can also introduce a gap if needed, or repeat the last sample if not enough data has arrived. A sketch of this receive-and-queue pattern follows below.
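The pattern itself (not the Media Foundation API, which is C++/COM) can be sketched in a few lines of Python; the camera address is a placeholder, and raw bytes stand in for parsed RTSP frames:

import queue
import socket
import threading

frames = queue.Queue(maxsize=100)  # step 3: bounded frame queue

def receive_loop(sock):
    while True:
        data = sock.recv(65536)  # step 2: blocking recv on the worker thread
        if not data:
            break
        frames.put(data)         # step 3: queue each received chunk

def on_sample_requested():
    try:
        return frames.get(timeout=1.0)  # step 4: deliver the head of the queue
    except queue.Empty:
        return None  # here you would signal a gap or repeat the last sample

sock = socket.create_connection(("camera.local", 554))  # placeholder address
threading.Thread(target=receive_loop, args=(sock,), daemon=True).start()  # step 1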
Media Foundation, by design, expects you to implement processing asynchronously. There are work queues and event generators; start/stop and other operations are expected to be initiated without blocking and completed asynchronously, with a notification/event signaling completion.
Following this design, you don't need a lot of threads. Media Foundation suggests that you instead use its work queues, which implement thread pools as needed.
However, that does not mean you can't use threads. You will have to implement asynchronous patterns when implementing the interfaces/methods mandatory for a Media Foundation source, but it is fine if you prefer to do the actual work on a separate worker thread (which in many cases results in simpler code).

ZeroMQ - Multiple Publishers and Listener

I'm just starting to understand and experiment with ZeroMQ.
It's not clear to me how I could set up two-way communication between more than two actors (publishers and subscribers) so that each component is able both to read from and write to the MQ.
This would make it possible to create an event-driven architecture, because each component could be listening for an event and reply with another event.
Is there a way to do this with ZeroMQ directly, or should I implement my own solution on top of it?
If you want simple two-way communication then you simply set up a publishing socket on each node, and let each connect to the other.
In a many-to-many setup this quickly becomes tricky to handle. Basically, it sounds like you want some kind of central node that all nodes can "connect" to, receive messages from, and, if some conditions on the subscriber are met, send messages to.
Since ZeroMq is a simple "power socket", and not a message queue (hence its name, ZeroMQ - Zero Message Queue), this is not feasible out of the box.
A simple alternative could be to let each node set up a UDP broadcast socket (not using ZeroMq, just regular sockets). All nodes can listen in on whatever takes place and "publish" their own messages back on the socket, effectively sending them to any listening nodes. This setup works on a LAN and in settings where it is OK for messages to get lost (like periodic state updates). If the messages need to be reliable (and possibly durable), you need a more advanced, full-blown message queue.
If you can do without durable message queues, you can create a solution based on a central node, a central message handler, to which all nodes can subscribe and send data. Basically, create a "server" with one REP (Reply) socket (for incoming data) and one PUB (Publisher) socket (for outgoing data). Each client then publishes data to the server's REP socket over a REQ (Request) socket and sets up a SUB (Subscriber) socket to the server's PUB socket. A sketch follows below.
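A minimal pyzmq sketch of that central node (the ports are arbitrary placeholders):

import zmq

ctx = zmq.Context()
rep = ctx.socket(zmq.REP)
rep.bind("tcp://*:5557")  # clients send events here via REQ sockets
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5558")  # clients listen here via SUB sockets

while True:
    msg = rep.recv()
    rep.send(b"OK")  # REQ/REP requires a reply to every request
    pub.send(msg)    # rebroadcast the event to all subscribers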
Check out the ZeroMq guide regarding the various message patterns available.
To spice it up a bit, you could add event "topics", including server-side filtering, by splitting the outgoing messages (on the server's PUB socket) into two message parts (see multi-part messages), where the first part specifies the "topic" and the second part contains the payload (e.g. temp|46.2, speed|134). This way, each client can register its interest in any topic (or all of them) and let the server filter out only matching messages. See this example for details.
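A self-contained pyzmq illustration of that topic filtering (single process, placeholder port; the short sleep works around PUB/SUB's slow-joiner effect):

import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5558")

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5558")
sub.setsockopt(zmq.SUBSCRIBE, b"temp")  # register interest in "temp" only
time.sleep(0.2)  # give the subscription time to propagate

pub.send_multipart([b"temp", b"46.2"])  # delivered: topic matches
pub.send_multipart([b"speed", b"134"])  # dropped by the subscription filter
topic, payload = sub.recv_multipart()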
Basically, ZeroMq is "just" an abstraction over regular sockets, providing a couple of messaging patterns to build your solution on top of. However, it relieves you of a lot of tedious work and offers scalability and performance out of the ordinary. It takes some getting used to, though. Check out the ZeroMq Guide for more details.

Ruby: How do I receive and send data in parallel?

I want to create an application that sends and receives data in parallel, like a chat application. It gets input and also sends some output, but not only in response to received data. I want to use UDP as the protocol. I'm using Ruby 1.9.3.
Here's the code that receives data:
s = UDPSocket.new
s.bind("localhost", 1234)
Socket.udp_server_loop_on([s]) do |message, sender|
  # do something with message
end
This code should run independently from the rest of the application; it shouldn't block it.
Should I use a thread? I've never written a network program and I'm not a professional developer, so please be patient. Perhaps my code/design is just crap, so feel free to tell me how this is done by professionals! ;)
UDP lends itself to this sort of non-blocking processing quite naturally, since you're receiving individual, atomic messages over your socket and can reply in the same fashion.
Inside that loop, just be sure to process things quickly and send response messages. If you make long blocking calls it will hold up your loop and affect response times.
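One way to keep the receive loop from blocking the rest of the program is a dedicated receive thread. Here is the pattern sketched in Python (it translates directly to a Ruby Thread wrapping the loop above; addresses and ports are placeholders):

import socket
import threading

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 1234))

def receive_loop():
    while True:
        message, sender = sock.recvfrom(4096)
        # handle the message quickly; long work here delays responses
        print("got", message, "from", sender)

threading.Thread(target=receive_loop, daemon=True).start()

# The main thread stays free to send whenever it wants,
# not only in response to incoming data.
sock.sendto(b"hello", ("127.0.0.1", 5678))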
EventMachine provides a structure for writing asynchronous applications and has its own methods for handling UDP and TCP sockets.
Don't forget to look at solutions which are already implemented. For chat applications, Socket.IO is a great place to start.
You should take a look at the EventMachine gem, which handles blocking IO very efficiently.
Among other things, it also offers TCP and UDP server/client APIs.

Shut down ZeroMQ receiving end without loss

I am developing a (Python/pyzmq) ZeroMQ server that receives incoming messages through a PULL socket.
Now, there will be times when I will do a clean restart of the server to upgrade it. My question is: can I somehow stop receiving incoming messages (on my PULL socket) so that a restart does not lose any messages? I am thinking of something like calling close() on the socket and then recv()ing the last messages. Possibly setting the high-water mark to zero would yield a similar result.
If none of the above works, I might be better off converting my socket to a REP socket and fetching each message one by one, ACKing every time. Since this would be synchronous, I guess it would be slower.
Right, 0MQ won't offer this type of reliable delivery by itself. You should definitely use a scheme with ACKs.
See Chapter Four - Reliable Request-Reply of the zguide.
I'm using clrzmq with ZMQ 3.2.2, and I got the above functionality by doing the following on the PULL socket:
- Setting the receive high-water mark to the number of messages I'm willing to keep in memory.
- Setting the buffer size appropriately.
- Calling socket.disconnect() on the receiving channel when I no longer wish to receive messages.
After disconnect, the channel no longer gets new messages. If you set the high-water mark on the sender side, it will start keeping messages in the sender queue (so that events will not get lost).
While the channel is disconnected, receive calls will still succeed as long as there are messages in the receiver queue. I'm using receive with a timeout, so if it times out after the channel has been disconnected, I assume the queue is empty, dispose of the channel, and restart the service.
When the service is back up, all messages stored in the sender queue get dispatched.
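The same drain-and-restart approach, sketched in pyzmq (the answer above uses clrzmq, but the calls map one-to-one; the endpoint is a placeholder, and this assumes the PULL side is the one that connects, so that it can disconnect):

import zmq

ctx = zmq.Context()
pull = ctx.socket(zmq.PULL)
pull.setsockopt(zmq.RCVHWM, 1000)    # bound how many messages we keep in memory
pull.setsockopt(zmq.RCVTIMEO, 1000)  # recv raises zmq.Again after 1 second
pull.connect("tcp://producer:5559")

# ... normal operation ...

pull.disconnect("tcp://producer:5559")  # stop accepting new messages
while True:
    try:
        msg = pull.recv()  # drain whatever is already queued locally
    except zmq.Again:
        break              # nothing arrived within the timeout: assume empty
pull.close()
ctx.term()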
