I need to emulate a radio communication network composed of N nodes, with these properties:
nodes either send then receive data, or receive then send data, but not at the same time.
data sent over the air are received by all nodes that are in receive mode at that time.
if two or more nodes send data simultaneously, the data are lost.
there is no time synchronization among nodes.
In Go, if I use a channel to emulate the transmission medium, data are serialized, and only one receiver gets the data, not all of them.
Also, I cannot think of a way to "ruin" the data if two senders try to send at the same time. Whether I use a mutex or not, one of the senders will successfully get its message sent.
Why not create a publisher/subscriber module using Go channels?
Create a centralized pub/sub module where all your sender and receiver nodes register themselves. When a node sends data, the data goes to that module, which walks its list of registered receivers, picks each receiver's channel, and writes the data to it.
You have to create one channel per node and register it with a central pub/sub module. This will definitely solve your problem.
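Here is a minimal sketch of such a hub in Go; all names (medium, frame, airtime) are my own assumptions, and the collision window is a simplification. The hub broadcasts each frame to every registered node over unbuffered channels, so a non-blocking send models "only nodes currently blocked in receive mode get the data", and a second transmission arriving within the simulated air time destroys the frames (collision):

package radionet

import "time"

type frame []byte

// medium emulates the shared over-the-air transmission medium.
type medium struct {
	tx      chan frame    // every sender node transmits on this channel
	rx      []chan frame  // one unbuffered channel per registered node
	airtime time.Duration // simulated duration a transmission occupies the air
}

func (m *medium) run() {
	for f := range m.tx {
		collided := false
		window := time.After(m.airtime)
	air:
		for {
			select {
			case <-m.tx: // another node transmitted inside the window
				collided = true
			case <-window:
				break air
			}
		}
		if collided {
			continue // two or more simultaneous senders: all frames are lost
		}
		for _, ch := range m.rx {
			select {
			case ch <- f: // node is blocked in receive mode: it gets the frame
			default: // node is sending or busy: it misses the frame
			}
		}
	}
}

Each node owns one rx channel and reads from it only while in receive mode; because the channels are unbuffered, a node that is not blocked on a receive at delivery time simply misses the frame, which matches the "all nodes in receive mode at that time" property without any clock synchronization.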
TL;DR: I want a channel with two extra fields that tell the producer whether it is allowed to send to the channel and, if so, what value the consumer expects. Although I know how to do this with shared memory, I believe that approach goes against Go's ideology of "Do not communicate by sharing memory; instead, share memory by communicating."
Context:
I wish to have a server S that runs (among others) three goroutines:
Listener, which just receives UDP packets and sends them to the demultiplexer.
Demultiplexer, which takes network packets and, based on some of their data, sends them into one of several channels.
Processing task, which listens to one specific channel and processes the data received on that channel.
To check whether some devices on the network are still alive, the processing task will periodically send out nonces over the network and then wait for k seconds. In those k seconds, other participants of my protocol that received the nonce will send a reply containing (among other information) the nonce. The demultiplexer will receive the packets from the listener, parse them and send them to the processing_channel. After the k seconds have elapsed, the processing task processes the messages pushed onto the processing_channel by the demultiplexer.
I want the demultiplexer not to just blindly send any response (of the correct type) it receives onto the processing_channel, but instead to check whether the processing task is currently even expecting any messages and, if so, which nonce value it expects. I made this design decision in order to drop unwanted packets as soon as possible.
My approach:
In other languages, I would have a class with the following fields (shown here as the equivalent Go struct for the shared-memory design):
type ActivatedChannel struct {
	mu                 sync.Mutex // guards the fields below
	flagExpectingNonce bool
	expectedNonce      int
	messages           *list.List // container/list, used as the message queue
}
Upon receiving a packet of the correct type, the demultiplexer would then simply acquire the lock of the ActivatedChannel object for processing_channel, check whether the flag is set and the nonce matches, and if so append the message to the messages list!
Problem:
This approach makes use of locks and shared memory, which does not align with Go's "Do not communicate by sharing memory; instead, share memory by communicating" mantra. Hence, I would like to know:
... whether my approach is "bad" regarding Go in the sense that it relies on shared memory.
... how to achieve the outlined result in a more Go-like way.
Yes, the approach you describe doesn't align with Go's idiomatic way of implementation, and you have rightly pointed out that in that approach you are communicating by sharing memory.
To achieve this in Go's idiomatic way, one approach could be for your demultiplexer to "remember" all the processing_channels that are expecting a nonce and the corresponding nonce values. Whenever a processing_channel is ready to receive a reply, it sends a signal to the demultiplexer saying that it is expecting one.
Since the demultiplexer is at the center of all the communication, it can maintain a mapping between each processing_channel and the nonce it expects. It can also maintain a "registry" of all the processing_channels that are expecting a reply.
In this approach, we are sharing memory by communicating.
For communicating that a processing_channel is expecting a reply, the following struct can be used:
type ChannelState struct {
	ChannelId        string // unique identifier for the processing channel
	IsExpectingNonce bool
	ExpectedNonce    int
}
In this approach, no lock is used.
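Building on the ChannelState struct above, here is a minimal sketch of that lock-free design; the packet and registration types, their field names, and the channel wiring are my assumptions, not from the question:

type packet struct {
	channelId string
	nonce     int
	payload   []byte
}

type registration struct {
	state ChannelState
	out   chan<- packet // the processing task's channel
}

// Only the demultiplexer goroutine touches the registry map, so no lock
// is needed: every state change arrives as a message on regs.
func demultiplexer(regs <-chan registration, in <-chan packet) {
	registry := make(map[string]registration)
	for {
		select {
		case r := <-regs:
			if r.state.IsExpectingNonce {
				registry[r.state.ChannelId] = r // reply window open: start forwarding
			} else {
				delete(registry, r.state.ChannelId) // window over: stop forwarding
			}
		case p := <-in:
			if r, ok := registry[p.channelId]; ok && r.state.ExpectedNonce == p.nonce {
				r.out <- p // expected nonce: hand the packet to the processing task
			}
			// any other packet is dropped as early as possible
		}
	}
}

The processing task registers itself (with IsExpectingNonce set and the nonce it just broadcast) right before its k-second wait and deregisters afterwards, so the demultiplexer's registry is updated purely by communicating.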
When using Ably for Pub/Sub over WebSockets, can I use wildcards to subscribe to multiple channels, like so:
var channel = ably.channels.get('foo:*')
channel.attach()
(disclaimer: I am a developer advocate for Ably, and posting and self-answering a commonly asked support question here on Stack Overflow so our users can find this more easily)
When attaching to a channel, you need to explicitly provide the channel name you are attaching to such as:
var channel = ably.channels.get('announcements')
channel.attach()
Attaching to more than one channel in a single operation is not possible, i.e. the following is not supported:
var channel = ably.channels.get('foo:*')
channel.attach()
/* attempting to attach to all channels matching the name foo:* will not work */
This is not possible for a number of reasons:
Attaching to an unbounded number of channels in a single operation will not scale, either for your client devices or for our servers terminating those connections
Channels in Ably's cluster are dynamically distributed across the available resources and move frequently. Each channel is largely autonomous and this is important to ensure the system remains reliable without a single point of failure or congestion. If a client were to subscribe to all channels matching a wildcard, then a connection would need to be maintained to every server in the cluster that could possibly be running a channel in case a channel is created that matches that wildcard. This does not scale.
If you are subscribed to wildcard channels then it is impossible to offer the data delivery guarantees and quality of service Ably provides on channels because:
At no point in time is there a way to deterministically know which channels a client is actually attached to
If the client device becomes overloaded (can't keep up with the stream) or exceeds rate limits, Ably's servers will have to selectively start dropping messages across random channels to ensure the client can continue to receive messages. Which messages should be dropped? How does a customer then work out which messages they missed?
However, because Ably's connections are multiplexed, allowing you to attach to and detach from any channels dynamically over the same connection, it is of course possible to effectively subscribe to wildcard channels by attaching to channels as you need them.
I have implemented the example from the RabbitMQ website:
RabbitMQ Example
I have extended it into an application with a button to send a message.
Now I have started two consumers on two different computers.
When I send messages, the first message is sent to computer1, the second message to computer2, the third to computer1, and so on.
Why is this, and how can I change the behavior to send each message to each consumer?
Why is this
As noted by Yazan, messages are consumed from a single queue in a round-robin manner. The behavior you are seeing is by design, making it easy to scale up the number of consumers for a given queue.
how can I change the behavior to send each message to each consumer?
To have each consumer receive the same message, you need to create a queue for each consumer and deliver the same message to each queue.
The easiest way to do this is to use a fanout exchange. This will send every message to every queue that is bound to the exchange, completely ignoring the routing key.
If you need more control over the routing, you can use a topic or direct exchange and manage the routing keys.
Whatever type of exchange you choose, though, you will need to have a queue per consumer and have each message routed to each queue.
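The question does not say which client library is in use, so purely as an illustration, here is a sketch of the consumer side of this fanout pattern using the Go client (github.com/rabbitmq/amqp091-go); the exchange name "broadcast" and the connection URL are assumptions:

package main

import (
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}

	// A fanout exchange copies every message to every queue bound to it,
	// ignoring the routing key.
	if err := ch.ExchangeDeclare("broadcast", "fanout", false, false, false, false, nil); err != nil {
		log.Fatal(err)
	}

	// Each consumer declares its own exclusive, server-named queue and
	// binds it to the exchange, so each consumer receives every message.
	q, err := ch.QueueDeclare("", false, true, true, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	if err := ch.QueueBind(q.Name, "", "broadcast", false, nil); err != nil {
		log.Fatal(err)
	}

	msgs, err := ch.Consume(q.Name, "", true, true, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}
	for m := range msgs {
		log.Printf("received: %s", m.Body)
	}
}

The producer then publishes to the "broadcast" exchange with an empty routing key (for example via ch.PublishWithContext), and every running consumer gets its own copy of each message.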
You can't; it's controlled by the server. Check the "Round-robin dispatching" section.
The server decides whose turn it is. I'm not sure whether there is a set of algorithms you can pick from, but in the end the server controls this (I think round-robin is the default algorithm).
Unless you want to use routing keys and exchanges.
I would see this more as a design question. Ideally, producers should create the exchanges and consumers should create the queues: each consumer creates its own queue and hooks it up to an exchange. This makes sure every consumer gets the message in its private queue.
What you're doing is essentially the "work queues" model, which is used to distribute tasks among worker nodes. Since each task needs to be performed only once, each message is sent to only one node. If you want to send a message to all the nodes, you need a different model, called "pub-sub", where each message is broadcast to all the subscribers. The following link is a simple pub-sub tutorial:
https://www.rabbitmq.com/tutorials/tutorial-three-python.html
I can make senders send messages, but when I create two receivers in one Session, the first one works and the second one blocks. While debugging, I see that the queue list the second receiver gets has size zero. I found that a Session is made for one thread; I don't know whether this problem is related to thread safety.
I am using the ActiveMQ implementation.
A JMS Session is absolutely single-threaded. As such, it can only have one active receiver. You have two options:
Use one connection with multiple sessions, each session having a receiver. Connections are thread-safe, and you can create many sessions from that single connection.
ActiveMQ gives you a number of options regarding multiplexing multiple destinations, so rather than having multiple receivers, you might want to focus on one, but use ActiveMQ's facilities to create virtual destinations that will funnel all the messages you want through the one receiver.
See this question.
I want to share data with multiple processes. My first attempt is to use a point-to-point message queue with multiple readers, since I read that P2P message queues are very fast.
During my tests, it seems that multiple readers read from the same queue, and once a message is fetched by one reader, the other readers are not able to fetch the same message.
What is a better IPC for sharing data to multiple processes?
The data is updated frequently (multiple times per second) so I think WM_COPYDATA is not a good choice and will interfere with the "normal" message queue.
My second attempt will probably be shared memory + mutex + events.
Point-to-point queues will work fine. Yes, when you send, only one receiver will get the message, but the sender can query the queue (by calling GetMsgQueueInfo) to see how many listeners there are (the wNumReaders member of MSGQUEUEINFO) and simply repeat the message that number of times.
Finally, it's perfectly valid for more than one thread or process to open the same queue for read access or for write access. Point-to-point message queues support multiple readers and multiple writers. This practice allows, for example, one writer process to send messages to multiple client processes or multiple writer processes to send messages to a single reader process. There is, however, no way to address a message to a specific reader process. When a process, or a thread, reads the queue, it will read the next available message. There is also no way to broadcast a message to multiple readers.
Programming Windows Embedded CE 6.0 Developer Reference, Fourth Edition, Douglas Boling, page 304
Despite the warning, ctacke's idea seems to be fine for my use cases.
Caveat:
My queue readers need to Sleep(10) after they fetch their share of messages, to allow the other readers to go and fetch messages. Without the Sleep(), only one reader process is ever signaled out of its wait.