Will BlockingCollection or TPL be suitable for a multiple producer-consumer scenario? - task-parallel-library

Will BlockingCollection alone, or TPL combined with BlockingCollection, be suitable for this case:
The program will gather data from multiple socket streams (it will act as a client), get a packet from a socket, and forward it to the central processor. The central processor will do calculations, reply back to the socket threads, and tell the GUI thread to update the GUI.
The central processor, the socket threads, and the GUI thread will all need queues. The data flows from the socket threads to the GUI through the central processor.
I am not sure about using TPL here, so I need some guidance.
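For what it's worth, the flow described above (socket threads producing, one central processor consuming, replies and GUI updates fanning back out) is a classic multiple-producer/single-consumer pipeline. Here is a minimal, language-agnostic sketch of that shape, using Python's queue.Queue as a stand-in for .NET's BlockingCollection (the names and the doubling "calculation" are invented for illustration):

```python
import queue
import threading

def socket_reader(stream_id, out_q, packets):
    # Producer: each socket thread pushes received packets to the central queue.
    for p in packets:
        out_q.put((stream_id, p))

def central_processor(in_q, gui_q, reply_qs, n_packets):
    # Single consumer: processes packets, replies to the originating socket
    # thread, and forwards a GUI update.
    for _ in range(n_packets):
        stream_id, packet = in_q.get()
        result = packet * 2               # placeholder for the real calculation
        reply_qs[stream_id].put(result)   # reply back to the socket thread
        gui_q.put((stream_id, result))    # tell the GUI thread to update

central_q = queue.Queue()
gui_q = queue.Queue()
reply_qs = {0: queue.Queue(), 1: queue.Queue()}

producers = [
    threading.Thread(target=socket_reader, args=(0, central_q, [1, 2])),
    threading.Thread(target=socket_reader, args=(1, central_q, [10])),
]
consumer = threading.Thread(target=central_processor,
                            args=(central_q, gui_q, reply_qs, 3))
for t in producers + [consumer]:
    t.start()
for t in producers + [consumer]:
    t.join()
```

In C#, BlockingCollection<T> alone would give you the same thread-safe handoff between threads; TPL tasks are just one convenient way to host the producer and consumer loops.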

Related

Real time transformations from/to pubsub and websocket push to client

I need to get some real-time data from a third-party provider, transform it, and push it to the browser via websockets.
The whole procedure should take no more than 200ms from the time I receive the data until the browser gets it.
I am thinking of going pub/sub to Dataflow to pub/sub again, where a websocket server will subscribe and push the messages to the browsers.
Is this approach correct, or is Dataflow not designed for something like this?
Dataflow is designed for reliable streaming aggregation and analytics; it is not designed for guaranteed sub-second latencies through the system. The core primitives, like windowing and triggering, allow for reliable processing of streams over defined windows of data despite late data and potential machine or pipeline errors. The main use case we have optimized for is, for example, aggregating and outputting statistics over a stream of data: emitting reliable statistics for each window while logging to disk for fault tolerance, and waiting if necessary before triggering to accommodate late data. As such, what we have not optimized for is the end-to-end latency you require.

How to handle global resources in Spring State Machine?

I am thinking of using Spring State Machine for a TCP client. The protocol itself is given and based on proprietary TCP messages with message id and length field. The client sets up a TCP connection to the server, sends a message and always waits for the response before sending the next message. In each state, only certain responses are allowed. Multiple clients must run in parallel.
Now I have the following questions related to Spring State machine.
1) During the initial transition from disconnected to connected, the client sets up a connection via java.net.Socket. How can I make this socket (or the DataOutputStream and BufferedReader objects obtained from the socket) available to the actions of the other transitions?
In this sense, the socket would be a kind of global resource of the state machine. The only way I have seen so far would be to put it in the message headers, but that does not feel very natural.
2) Which runtime environment do I need for Spring State Machine?
Is a JVM enough or do I need Tomcat?
Is it thread-safe?
Thanks, Wolfgang
There's nothing wrong with using event headers, but those are not really global resources, as a header exists only for the duration of a single event's processing. I'd try to add the needed objects to the machine's extended state, which is then available to all actions.
You need just a JVM. By default, machine execution is synchronous, so there should not be any threading issues. The docs have notes on replacing the underlying executor with an asynchronous one (this is usually done if multiple concurrent regions are used).
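To make the extended-state suggestion concrete, here is a tiny sketch of the idea (plain Python, not Spring State Machine code; the names and the fake socket are invented): a connect action stores a shared resource in the machine's extended state, and any later action reads it back.

```python
# A minimal sketch of "extended state": a key-value store attached to the
# machine itself, visible to every action, unlike per-event message headers.
class StateMachine:
    def __init__(self):
        self.state = "DISCONNECTED"
        self.extended_state = {}   # shared resources live here

def connect_action(machine):
    # On the DISCONNECTED -> CONNECTED transition, store the connection
    # (a placeholder string here, instead of a real java.net.Socket).
    machine.extended_state["socket"] = "fake-socket-connection"
    machine.state = "CONNECTED"

def send_action(machine):
    # Any later transition's action can fetch the same resource.
    sock = machine.extended_state["socket"]
    return f"sending over {sock}"

m = StateMachine()
connect_action(m)
```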

Is polling on a ZMQ router socket threadsafe?

I have multiple threads interacting with the same ZeroMQ router socket (bad idea, I know). I manage thread safety with locks on all sends and receives.
Do I also need to lock polling or is this relatively benign operation threadsafe?
Using polling removes the need for multithreaded access to the socket. You can pick up events as they come in using a poll loop on one thread and then distribute those events to other threads for processing. This way you don't need to share the socket at all.
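As a rough sketch of that pattern (plain stdlib Python standing in for a ZMQ socket and poller; the names are invented), one thread owns the socket and runs the poll loop, while workers only ever touch a thread-safe queue:

```python
import queue
import selectors
import socket
import threading

def poll_loop(sock, work_q, n_events):
    # The only thread that ever touches the socket: polls for readability,
    # receives, and hands completed messages to a thread-safe queue.
    sel = selectors.DefaultSelector()
    sel.register(sock, selectors.EVENT_READ)
    received = 0
    while received < n_events:
        for key, _ in sel.select(timeout=1.0):
            work_q.put(key.fileobj.recv(1024))
            received += 1
    sel.close()

def worker(work_q, results, n_events):
    # "Processing" here is just uppercasing the payload.
    for _ in range(n_events):
        results.append(work_q.get().decode().upper())

# A datagram socketpair preserves message boundaries, as ZMQ messages do.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
work_q = queue.Queue()
results = []

poller = threading.Thread(target=poll_loop, args=(b, work_q, 2))
proc = threading.Thread(target=worker, args=(work_q, results, 2))
poller.start()
proc.start()

a.send(b"hello")
a.send(b"world")

poller.join()
proc.join()
a.close()
b.close()
```

Because the socket never leaves the polling thread, no locks are needed anywhere.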

ZeroMQ - Multiple Publishers and Listener

I'm just starting to understand and experiment with ZeroMQ.
It's not clear to me how I could have two-way communication between more than two actors (publisher and subscriber), so that each component is able both to read and write on the MQ.
This would allow an event-driven architecture, because each component could listen for an event and reply with another event.
Is there a way to do this with ZeroMQ directly, or should I implement my own solution on top of it?
If you want simple two-way communication then you simply set up a publishing socket on each node, and let each connect to the other.
In a many-to-many setup this quickly becomes tricky to handle. Basically, it sounds like you want some kind of central node that all nodes can "connect" to, receive messages from, and, if certain conditions on the subscriber are met, send messages to.
Since ZeroMq is a simple "power-socket", and not a message queue (hence its name, ZeroMQ - Zero Message Queue) this is not feasible out-of-the-box.
A simple alternative could be to let each node set up a UDP broadcast socket (not using ZeroMq, just regular sockets). All nodes can listen in to whatever takes place and "publish" their own messages back on the socket, effectively sending them to any nodes listening. This setup works on a LAN and in settings where it is OK for messages to get lost (like periodic state updates). If the messages need to be reliable (and possibly durable), you need a more advanced, full-blown message queue.
If you can do without durable message queues, you can create a solution based on a central node, a central message handler, to which all nodes can subscribe and send data. Basically, create a "server" with one REP (Response) socket (for incoming data) and one PUB (Publisher) socket (for outgoing data). Each client then publishes data to the server's REP socket over a REQ (Request) socket and sets up a SUB (Subscriber) socket to the server's PUB socket.
Check out the ZeroMq guide regarding the various message patterns available.
To spice it up a bit, you could add event "topics", including server-side filtering, by splitting the outgoing messages (on the server's PUB socket) into two message parts (see multi-part messages), where the first part specifies the "topic" and the second part contains the payload (e.g. temp|46.2, speed|134). This way, each client can register its interest in any topic (or all) and let the server filter out only matching messages. See this example for details.
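As a side note on how that filtering works: ZeroMQ SUB sockets match subscriptions as a byte prefix against the start of the first message part. A tiny sketch of that matching rule (plain Python, names invented):

```python
def matches(subscriptions, topic):
    # An empty subscription ("") matches everything, as in ZeroMQ.
    return any(topic.startswith(s) for s in subscriptions)

def filter_messages(subscriptions, messages):
    # messages are (topic, payload) pairs, i.e. two-part messages.
    return [(t, p) for t, p in messages if matches(subscriptions, t)]

msgs = [("temp", "46.2"), ("speed", "134"), ("temp.indoor", "21.5")]
```

Note that prefix matching means subscribing to "temp" also delivers "temp.indoor" messages, which is often exactly what you want for hierarchical topics.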
Basically, ZeroMq is "just" an abstraction over regular sockets, providing a couple of messaging patterns to build your solution on top of. However, it relieves you of a lot of tedious work and provides scalability and performance out of the ordinary. It takes some getting used to though. Check out the ZeroMq Guide for more details.

TCP flow control on Indy TCP Server

The Indy TIdTCPServer component has an OnExecute event where you can process incoming data. My application involves streaming data that is processed before going to a printer, so I'm dependent on the output device being ready. What I want to do is let the TCP flow control manage the input stream in the event of the output stream being busy.
What I don't know is how to best handle this situation. The Indy documentation is a little light on usage examples, any guidance appreciated!
You don't need to deal with TCP/IP flow control manually. Simply do not read any new input data in your OnExecute code if the device is not ready; that is all you have to do. The data will sit in the socket's receive buffer until Indy reads it into its own buffer, where it will then sit until you read it into your own code. If the socket's receive buffer fills up, TCP/IP will automatically notify the other party to stop sending data until the buffer frees up some space.
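The buffer-filling behaviour described above can be demonstrated in a few lines of stdlib Python (using a Unix socket pair as a stand-in for a TCP connection; the buffer sizes are arbitrary): once the reader stops draining its side, the sender's non-blocking writes eventually fail with BlockingIOError, the same backpressure that would simply block a sender in blocking mode.

```python
import socket

# The receiver never reads, so both buffers eventually fill up and the
# sender is throttled, exactly the effect relied on in the answer above.
sender, receiver = socket.socketpair()
sender.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
receiver.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
sender.setblocking(False)

sent = 0
blocked = False
try:
    for _ in range(10000):
        sent += sender.send(b"x" * 1024)
except BlockingIOError:
    blocked = True   # buffers are full; the transport now throttles the peer

sender.close()
receiver.close()
```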
I'm not sure to what degree you have already developed your own code.
If you are still a beginner, you might find the demo samples from http://sourceforge.net/projects/indy10clieservr/ helpful as a starting point.
