What is the better way to separate topics in ZeroMQ - just by specifying different ports, or by using prefixes as in: ZeroMQ and multiple subscribe filters in Python?
It looks simpler to specify ports.
I do not mean connecting to multiple topics. I mean that different parts of the application will connect to different ports.
No, using different port numbers will not work the way you expect.
aSubTypeSOCKET.setsockopt( zmq.SUBSCRIBE, aLeftMatchTopicFILTERasSTRING )
aSubTypeSOCKET.setsockopt( zmq.SUBSCRIBE, anotherTopicFILTERasSTRING )
...
aSubTypeSOCKET.setsockopt( zmq.UNSUBSCRIBE, aLeftMatchTopicFILTERasSTRING )
...
aSubTypeSOCKET.setsockopt( zmq.SUBSCRIBE, yetAnotherTopicFILTERasSTRING )
is the standard mechanism for setting up the Topic-filter matching condition(s) to effectively work with.
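As a minimal, runnable pyzmq sketch of that mechanism ( the address and the b"alpha" / b"beta" topic strings are invented here just for illustration; recent pyzmq versions expect the filters as byte-strings ):
import zmq

ctx = zmq.Context()
aSubTypeSOCKET = ctx.socket( zmq.SUB )
aSubTypeSOCKET.connect( "tcp://127.0.0.1:12345" )           # a single transport target is enough

aSubTypeSOCKET.setsockopt( zmq.SUBSCRIBE,   b"alpha" )      # receive messages that start with b"alpha"
aSubTypeSOCKET.setsockopt( zmq.SUBSCRIBE,   b"beta" )       # ... and those that start with b"beta"
aSubTypeSOCKET.setsockopt( zmq.UNSUBSCRIBE, b"alpha" )      # later, drop the first Topic-filter again

while True:
    print( aSubTypeSOCKET.recv() )                          # only b"beta"-prefixed messages arrive now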
Multiple connect()-s are possible, but
aSubTypeSOCKET.connect( "tcp://123.123.123.123:12345" )
aSubTypeSOCKET.connect( "tcp://123.123.123.123:23456" )
aSubTypeSOCKET.connect( "tcp://123.123.123.123:34567" )
will indeed get aSubTypeSOCKET .connect()-ed to several port-numbers, but it is still the sender side that decides whether the subscription "mechanics" delivers the content "specialisation" you might have expected to use instead of the common Topic-filter(s) subscribed to via .setsockopt().
Next, if multiple .connect()-ed PUB-s deliver messages, the SUB-side receives them through an incoming "routing" strategy that works in a "Fair-queued" manner. The SUB-side therefore has to turn the round-robin wheel a complete revolution just to check whether any present / not-present and non-empty subscription Topic-filter applies, and it applies to all PUB-sides symmetrically; likewise, the first empty-string Topic-filter will "unblock" / "short-cut" all senders, so any message from all PUB-access-nodes gets delivered.
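To illustrate ( a sketch only, reusing the port numbers from the example above ), the multi-.connect() variant with the first empty-string Topic-filter would look like this - one filter, applied to all PUB-access-nodes at once:
import zmq

ctx = zmq.Context()
aSubTypeSOCKET = ctx.socket( zmq.SUB )

for aPortNUMBER in ( 12345, 23456, 34567 ):                 # the same SUB-socket, .connect()-ed thrice
    aSubTypeSOCKET.connect( "tcp://123.123.123.123:%d" % aPortNUMBER )

aSubTypeSOCKET.setsockopt( zmq.SUBSCRIBE, b"" )             # an empty Topic-filter "unblocks" ALL PUB-sides

while True:
    print( aSubTypeSOCKET.recv() )                          # deliveries get Fair-queued across all PUB-sides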
I would not call that any simpler.
APPs typically build a complex signalling/messaging plane, an infrastructure indeed as persistent as possible, where many socket-Access-Points .bind() / .connect() to many individual access-point addresses, as desired, using some mix of the many transport-classes available { inproc:// | ipc:// | tcp:// | pgm:// | epgm:// | vmci:// }.
This is one of the many strengths that designs based on ZeroMQ can and do enjoy.
I have a web worker that crunches data when a message is received from the main thread. I've created a hot observable of those messages (using fromEvent). While the worker is crunching numbers, several messages will have come in telling the worker to re-crunch, I wanted to disregard all but the latest of those.
I've gotten something that works:
messages$.pipe(
  bufferTime(16),
  filter(x => x.length > 0),
  map(xs => xs[xs.length - 1])
);
But it strikes me as suboptimal. I don't like, for example, that a bunch of blank arrays are emitted until I've filtered them out.
Is there a simpler approach I'm overlooking? Do I need to write a custom operator to get an optimal solution?
I think you can replace those 3 operators with debounceTime(0):
messages$.pipe(
  debounceTime(0)
)
several messages will have come in telling the worker to re-crunch
This approach presumes that these messages arrive synchronously ( i.e. within the same tick ), since debounceTime(0) only drops a value when another value arrives before its zero-delay timer fires.
So I'm new to ZeroMQ and I am trying to send a byte message with ZeroMQ, using a PUB / SUB setting.
Choice of programming language is not important for this question since I am using ZeroMQ for communication between multiple languages.
Here is my server code in python:
import zmq
import time
port = "5556"
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:%s" % port)
while True:
    socket.send(b'\x84\xa5Title\xa2hi\xa1y\xcb\x00\x00\x00\x00\x00\x00\x00\x00\xa1x\xcb#\x1c\x00\x00\x00\x00\x00\x00\xa4Data\x08')
    time.sleep(1)
and here is my client code in python:
import zmq
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")
total_value = 0
for update_nbr in range(5):
    string = socket.recv()
    print(string)
My client simply blocks at string = socket.recv().
I have done some studying, and apparently, if I were to send a string using the PUB / SUB setting, I would need to set some "topic filter" in order to make it work. But I am not sure how to do that if I send a byte message.
ZeroMQ defines protocols that guarantee cross-platform compatibility of both the behaviours and the message-content.
The root-cause: to start receiving messages, one must change the initial "topic-filter" state of the SUB-socket ( which initially is a "receive nothing" Zero-subscription ).
ZeroMQ is a lovely set of tools, created around smart principles.
One of these says: do nothing on the SUB-side until .setsockopt( zmq.SUBSCRIBE, ... ) explicitly states what to subscribe to, i.e. what to start checking the incoming messages against ( older zmq-fans will remember the initial design, where the PUB-side always distributed all messages down the road, towards each connected SUB-"radio-broadcast receiver", and upon each message receipt the SUB-side performed the "topic-filtering" on its own. Newer versions of zmq reverse the architecture and perform the filtering on the PUB-side ).
Anyway, the initial state of the "topic-filter" makes sense. Who knows what ought to be received a-priori? Nobody. So receive nothing.
Given you need or wish to start the job, an easy move is to subscribe to anything, which lets any message get through.
Yes, that simple .setsockopt( zmq.SUBSCRIBE, "" )
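Applied to the client code above, the only missing line is the subscription itself ( a minimal sketch; recent pyzmq versions want the filter passed as bytes, hence b"" ):
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")
socket.setsockopt(zmq.SUBSCRIBE, b"")   # without this, the "receive nothing" default stays in effect

for update_nbr in range(5):
    string = socket.recv()
    print(string)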
If one needs some key-based processing and the messages are of a reasonable size ( no giga-BLOBs ), one may simply prefix some key ( or a byte-field, if more hacky ) in front of the message-string ( or of a payload byte-field ).
Sure, one may save some fraction of the transport-layer overhead in case the zmq-filtering is performed on the PUB-side ( not valid for the older API versions ). Otherwise it is typically no big deal to subscribe to receive "anything" and to check the messages for some pre-assembled context-key ( a prefix substring, a byte-field etc ) before the rest of the message payload gets processed.
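A small, self-contained sketch of that key-prefix idea ( the b"Title" key and port 5557 are just assumptions made for this example ):
import time
import zmq

KEY = b"Title"                                  # an agreed key, prefixed in front of the payload

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5557")

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5557")
sub.setsockopt(zmq.SUBSCRIBE, KEY)              # left-match filtering on the key-prefix

time.sleep(1)                                   # give the slow-joining SUB a moment to finish connecting

pub.send(KEY + b"\x01\x02\x03")                 # key-prefix + raw byte-field payload
message = sub.recv()
payload = message[len(KEY):]                    # strip the key before processing the payload
print(payload)                                  # -> b'\x01\x02\x03'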
The Best Next Step:
If your code strives to go into a production state, not to remain just an academic example, there will have to be much more work done to provide survivability measures for the hostile real-world production environments.
An absolutely great perspective for doing this, and a good read for realistic designs with ZeroMQ, is Pieter HINTJENS' book "Code Connected, Vol.1" ( you may check my posts on ZeroMQ to find the book's direct pdf-link ).
Plus another good read comes from Martin SUSTRIK, the co-father of ZeroMQ, on low-level truths about the ZeroMQ implementation details & scalability.
How and why is a sensor node Id used?
What are the different kinds of a sensor node?
Why is kind used?
I need some examples of using kind in a sensor node application.
I need references and help.
setKind() and getKind() are used to distinguish the different types of sensor nodes you create, through the msg packets they send. You can setKind() for different types of message packets. For example, for a mobile node you may want to get the location of the node through a received msg packet; you can set a kind on that msg packet at the sending end, let's call it "positionMsg". Similarly, you can use getKind() at the receiving end to identify whether the message was a positionMsg or something else. You can give an identity to a node as well as to a message using these methods. In the same way, you can setKind() for msg packets delivered by different nodes, and at the receiving end you can getKind() to identify which node the message was sent from.
Similarly, you can use the setId() and getId() methods to identify different sensor nodes. These methods are useful when you are working with multiple types of nodes.
Sensor nodes can be of different types; it all depends on your implementation. A node is implemented using a simple module or a compound module, and its nature may vary with the implementation. You can use them as a router / gateway, cluster head, mobile node (data mule), etc., depending on your need.
For more information you can use the OMNeT++ user manual: https://omnetpp.org/doc/omnetpp/Manual.pdf
Use this link for an implementation guide: https://omnetpp.org/doc/omnetpp/api/classcMessage.html
Use the TicToc tutorial and use these methods with cMessage objects.
I want my web application to push live updates notifications to the clients.
I use common lisp and hunchentoot on ccl.
What libraries I should use?
I have found clws and hunchensockets.
The latter is not recommended for production use.
I need production level code.
For the first one, clws, there is an example on GitHub. But I could not figure out how to send data to the client without the client first sending a message, i.e. just by the client opening the socket connection.
Seemingly there is not much difference from the classical HTTP style, if the client requests then the server responds. What am I missing there?
Here's a trick for finding example code:
https://github.com/search?l=common-lisp&q=defsystem+clws&ref=searchresults&type=Code
Of course, these examples vary in quality.
A similar approach may work at other large code-hosting services.
One should use write-to-client-text or write-to-clients-text to send server-initiated messages to the client(s), for one client and for many or all of them, respectively.
One should first have the list of clients connected to the resource created in the examples, by defining a class for the resource like this:
(defclass echo-resource (ws-resource)
((clients :initform () :accessor clients)))
What is not mentioned in the examples there is to keep a name for this resource instance once it is defined, so it can be used later:
(setf res1 (make-instance 'echo-resource))
(register-global-resource "/echo"
                          res1
                          (origin-prefix "http://127.0.0.1" "http://localhost"))
Then you can gather the list of clients connected to this resource via the clients accessor of the echo-resource class:
(clients res1)
Now the functions mentioned at the top can be used from this package like this:
(write-to-client-text (car (clients res1)) "new message to one")
(write-to-clients-text (clients res1) "<p id='messagetoall'>new message to all</p>")
How do I scale RabbitMQ consumers in Ruby using the AMQP gem?
I read the documentation and came up with something that (seemingly) works
in a trivial example. The examples scale horizontally. New processes connect
to the broker and receive a subset of messages. From there each process can
spin up multiple consumer threads. It uses the low level consumer interface
described in the documentation.
Here's the code:
require 'amqp'
workers = (ARGV[1] || 4).to_i
puts "Running #{workers} workers"
AMQP.start do |amqp|
  channel = AMQP::Channel.new
  channel.on_error do |conn, ex|
    raise ex.reply_text
  end

  exchange = channel.fanout 'scaling.test', durable: true, prefetch: 1
  queue = channel.queue("worker_queue", auto_delete: true).bind(exchange)

  workers.times do |i|
    consumer = AMQP::Consumer.new channel, queue, "consumer-#{i}", exclusive = false, manual_ack = false
    consumer.consume.on_delivery do |meta, payload|
      meta.ack
      puts "Consumer #{consumer.consumer_tag} in #{Process.pid} got #{payload}"
    end
  end

  trap('SIGTERM') do
    amqp.start { EM.stop }
  end
end
There are a few things I'm unsure of:
1. Does the exchange type matter? The documentation states that a direct exchange load-balances messages between queues. I tested this example using direct and fanout exchanges and it functioned the same way. So if I'd like to support vertical and horizontal scaling, does the exchange type matter?
2. What should the :prefetch option be? I thought one would be best.
3. How does the load balancing work, specifically? The documentation states that load balancing happens between consumers and not between queues. However, when I run two processes I can see process one print out "1, 2, 3, 4", then process two print out "5, 6, 7, 8". I thought they would be out of order, or is the channel itself the consumer? That would make sense in accordance with the output, but not the documentation.
4. Does this look correct from the EventMachine perspective? Do I need to do some sort of thread pooling to get multiple consumers in the same process working correctly?
Most of this is actually covered in the documentation for brokers like RabbitMQ, but in answer to your questions:
For a worker queue you most likely want a direct exchange, that is, one which will route a message (job) to exactly one worker, and not to multiple workers at once. But this might change depending on your work: a fanout exchange, by definition, routes the same message to every queue bound to it.
Prefetch should be 1 for this type of setup. Generally speaking, this asks the broker to keep the consumer's network buffer filled with 1 un-acked message at a time. An alternative setup would be to have 1 consumer and n workers, in which case you would set the prefetch to n. It is also worth noting that in this sort of setup you shouldn't ack until after you've done the work.
Load balancing is basically a round-robin between consumers, which is why you're seeing everything delivered sequentially. If each bit of work takes a different amount of time, you'll see the distribution change.
I hope that helps. I haven't done anything with the Ruby AMQP library for a while — we rewrote all our workers in Go.