I have multiple senders connecting to my receiver simultaneously via Chrome. When one of the senders disconnects (closes the browser window, refreshes the page, loses connection, etc.), I want the "onSenderDisconnected" event to fire on the receiver.
So far, this only seems to happen when the connection is genuinely lost. If the sender simply refreshes, for example, the old sender is never dropped and a new sender is created. That means I might have two senders "connected" from the same source.
Is there any way I can either drop the old sender when a client reconnects, or keep the same senderID across reconnects?
I also want to give each sender the option of disconnecting from the session manually (with a button). The only way to do this currently is to stop casting to the device, but that ends the session for all users. How might I go about this?
For the first issue, I handle it by keeping track of my senders manually.
Each sender has an ID, something like 2:client:23522. The first number before the colon (2) seems to stay the same no matter how many times that specific client reconnects to the session. The second number (23522) changes each time.
By checking the first number of a connecting client, you can determine whether it is a new client or an existing client that has reconnected. That way, when sending messages back and forth between the receiver and its senders, you can keep a collection of the "active" clients and forget about the old, unused client IDs.
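A minimal sketch of that bookkeeping (shown in Python purely for illustration; a real Cast receiver would do this in JavaScript, and the ID format and hook names here are assumptions based on the description above):

```python
# Keep at most one "active" sender per stable client prefix.
# Sender IDs are assumed to look like "2:client:23522", where the leading
# number identifies the client and the trailing number changes on reconnect.

active_senders = {}  # stable prefix -> full sender ID

def client_key(sender_id: str) -> str:
    """Return the stable part of a sender ID ("2" from "2:client:23522")."""
    return sender_id.split(":", 1)[0]

def on_sender_connected(sender_id: str) -> None:
    key = client_key(sender_id)
    old_id = active_senders.get(key)
    if old_id is not None and old_id != sender_id:
        # Same client reconnected (e.g. page refresh): forget the stale ID.
        print(f"dropping stale sender {old_id} in favour of {sender_id}")
    active_senders[key] = sender_id

def on_sender_disconnected(sender_id: str) -> None:
    key = client_key(sender_id)
    # Only remove the entry if the disconnecting ID is still the active one.
    if active_senders.get(key) == sender_id:
        del active_senders[key]

# Example: a refresh produces a new ID for the same client.
on_sender_connected("2:client:23522")
on_sender_connected("2:client:98411")   # hypothetical new ID after refresh
print(active_senders)                   # {'2': '2:client:98411'}
```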
For the first item (i.e. adding more senders upon each reload), I have opened an internal issue to investigate that. For the second one, you currently cannot disconnect your Chrome sender without stopping the application; you can either (a) stop the application and disconnect (if you use the Cast extension) or (b) close the tab, which is treated as if nothing has happened. We are considering bringing this closer to the APIs on the other two platforms.
I have a ZMQ_PULL/ZMQ_PUSH socket connection.
I have multiple ZMQ_PUSH connections pushing to a single ZMQ_PULL connection.
ZMQ_PUSH connection 1----->
ZMQ_PUSH connection 2-----> ZMQ_PULL
ZMQ_PUSH connection N----->
I do not need every message, I just need the latest message that was sent. I am doing some inference on the back end and am streaming the results to the ZMQ_PULL socket.
I have set the ZMQ_PULL socket to Conflate=true
"If set, a socket shall keep only one message in its inbound/outbound queue, this message being the last message received/the last message to be sent. Ignores ZMQ_RCVHWM and ZMQ_SNDHWM options."
But after testing I realize I actually need the last message from each connection, not just the last message overall. So, with 3 connections, it should grab from each connection in round-robin fashion, so that I constantly have the latest from each connection.
Is there an option that is like Conflate, but instead of for all messages, it is for each connection?
Docs: http://api.zeromq.org/4-0:zmq-setsockopt
Is there an option that is like Conflate, but instead of for all messages, it is for each connection?
No.
The documentation you cite explains that 0MQ does not currently offer direct support for such a single-socket use case. You could certainly code it up and submit an upstream PR so that future revs of 0MQ offer such functionality.
Given that you'll need app-level support to make this work with 0MQ 4.3, the simplest approach would be to maintain N ZMQ_PULL sockets with ZMQ_CONFLATE set, as you're already aware.
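A minimal pyzmq sketch of that approach, assuming each pusher gets its own endpoint (the ports and endpoints here are made up):

```python
import zmq

# One conflated PULL socket per pusher, so each connection keeps
# only its own most recent message.  Endpoints are hypothetical.
ENDPOINTS = ["tcp://127.0.0.1:5551",
             "tcp://127.0.0.1:5552",
             "tcp://127.0.0.1:5553"]

ctx = zmq.Context.instance()
poller = zmq.Poller()
pulls = []

for ep in ENDPOINTS:
    s = ctx.socket(zmq.PULL)
    s.setsockopt(zmq.CONFLATE, 1)   # keep only the last message
    s.bind(ep)                      # set CONFLATE before bind/connect
    poller.register(s, zmq.POLLIN)
    pulls.append(s)

while True:
    ready = dict(poller.poll(timeout=100))   # milliseconds
    for s in pulls:
        if s in ready:
            latest = s.recv()                # latest message from that pusher
            # ... run inference on `latest` ...
```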
An alternate approach would be to assign a dedicated thread or process to keep draining the existing muxed socket, and update a shared memory data structure that interested clients could consult. The idea is to burn a core on keeping the queue mostly empty, while doing no processing, just focusing on communications. Then other cores can examine "most recent message" and each one then embarks on some expensive processing, while another core continues to keep the queue drained. This is essentially offering the 0MQ service proposed above but at a different place in the stack, up a level, within your application.
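A rough sketch of that drainer in Python/pyzmq, under the assumption that each pusher tags its messages with a source ID as a separate frame (0MQ itself doesn't provide that framing):

```python
import threading
import zmq

latest = {}              # source id -> most recent payload
latest_lock = threading.Lock()

def drainer(endpoint: str = "tcp://127.0.0.1:5550") -> None:
    """Continuously drain the muxed PULL socket, keeping only the newest
    message per source.  Messages are assumed to arrive as two frames:
    [source_id, payload]."""
    ctx = zmq.Context.instance()
    pull = ctx.socket(zmq.PULL)
    pull.bind(endpoint)
    while True:
        source, payload = pull.recv_multipart()
        with latest_lock:
            latest[source] = payload       # overwrite: older messages are dropped

threading.Thread(target=drainer, daemon=True).start()

# Worker side: grab the current snapshot and do the expensive processing.
def snapshot() -> dict:
    with latest_lock:
        return dict(latest)
```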
To do this in a distributed way, the "queue draining service" would need to know about idle workers. That is, a worker could publish a brief "I just completed an expensive task" message, which would trigger the drainer to post a fresh work item, never using shared memory at all. This lets the drainer worry about eliding dup messages that arrived when no one was available to immediately start work on them, which have been superseded by a more recent message.
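A sketch of that distributed variant, with the same assumed [source_id, payload] framing; here the "I just finished" message is modelled as a REQ/REP back-channel rather than a PUB, which keeps the hand-off simple (all endpoints are made up):

```python
import time
import zmq

def drainer(data_ep="tcp://127.0.0.1:5550", work_ep="tcp://127.0.0.1:5560"):
    """Drain the muxed feed, keep only the freshest item per source, and
    hand one out whenever an idle worker asks for work."""
    ctx = zmq.Context.instance()
    feed = ctx.socket(zmq.PULL)      # all pushers connect here
    feed.bind(data_ep)
    dispatch = ctx.socket(zmq.REP)   # workers ask for work here
    dispatch.bind(work_ep)

    latest = {}                      # source id -> freshest payload
    poller = zmq.Poller()
    poller.register(feed, zmq.POLLIN)
    poller.register(dispatch, zmq.POLLIN)

    while True:
        ready = dict(poller.poll())
        if feed in ready:
            source, payload = feed.recv_multipart()
            latest[source] = payload                  # supersede older messages
        if dispatch in ready:
            dispatch.recv()                           # "I just finished a task"
            if latest:
                dispatch.send_multipart(list(latest.popitem()))
            else:
                dispatch.send_multipart([b"", b""])   # nothing fresh yet

def worker(work_ep="tcp://127.0.0.1:5560"):
    ctx = zmq.Context.instance()
    req = ctx.socket(zmq.REQ)
    req.connect(work_ep)
    while True:
        req.send(b"ready")
        source, payload = req.recv_multipart()
        if not source:
            time.sleep(0.05)          # nothing to do yet; ask again shortly
            continue
        # ... expensive processing on (source, payload) ...
```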
I read a lot of questions around the same use case but couldn't find any proper answer on Google.
One theory is that the server keeps a long poll open with sender (A), and whenever a typing event is triggered the sender sends an update to the server.
On the receiver's end (B), another long-polling request is kept open to the server, and as soon as the server gets the update from sender (A), it sends it on to receiver (B).
But this seems dubious, in the sense that the servers would have to handle so many (~millions of) long-polling requests at any given time, which would slow them down.
Most chat systems probably keep persistent TCP connections open with their clients.
This solves many issues in chat systems by avoiding polling in multiple scenarios: not only the ones you mentioned, but receiving new messages as well.
I recommend checking out this video
In this type of design, "Someone is typing" becomes just one more event type: it has an SRC and a DST, and the backend infrastructure takes care of routing the event to the correct websocket for the DST client.
I imagine, as others have mentioned, that the clients throttle these events to avoid network traffic on every key press.
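A minimal sketch of that idea, with a generic send() callback standing in for the websocket write and illustrative field names:

```python
import time

def typing_event(src: str, dst: str) -> dict:
    """'Someone is typing' is just one more routed event type."""
    return {"type": "typing", "src": src, "dst": dst, "ts": time.time()}

class TypingThrottle:
    """Client-side throttle: emit at most one typing event per interval,
    instead of one per key press."""
    def __init__(self, interval: float = 3.0):
        self.interval = interval
        self._last_sent = 0.0

    def on_key_press(self, send, src: str, dst: str) -> None:
        now = time.time()
        if now - self._last_sent >= self.interval:
            send(typing_event(src, dst))   # send() would write to the websocket
            self._last_sent = now

# Example: only the first of a burst of key presses produces an event.
throttle = TypingThrottle(interval=3.0)
for _ in range(10):
    throttle.on_key_press(print, src="alice", dst="bob")
```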
I have a PUB server. How can it tell what filters are subscribed to, so that it knows what data it has to create? The server doesn't need to create data that no SUB clients are interested in.
Say the set of possible filters is huge ( or infinite ), but subscribers at any given time are just subscribed to a few of them.
Example: say SUB clients are only subscribed to weather feed data for a few area codes in New York and Paris. Then the PUB server shouldn't have to create weather data for every other area code in every other city in the world, just to throw it all away again.
How do you find out all the subscribed to filters in a PUB server?
If there is no easy way, how do I solve this in another way?
I'll answer my own question here in case it's of use to anyone else.
The requirements were:
1. The client should be able to ask the server what IDs (topics) are available for subscription.
2. The client should choose the IDs it is interested in and tell the server about them.
3. The server should create data for all subscribed-to IDs and send that data to the clients.
4. The client and server should not block/hang if either one goes away.
Implementation:
Step 1 is two-way traffic and is done with REQ/REP sockets.
Step 2 is one-way traffic from one client to one server, and is done with PUSH/PULL sockets.
Step 3 is one-way traffic from one server to many clients, and is done with PUB/SUB sockets.
Step 4: the receives can block either the server or the client if the other one is not there. Therefore I followed the "lazy pirate" pattern of checking whether there is anything in the queue before trying to receive. (If there is nothing in the queue, I check again on the next loop of the program, and so on.)
Step 4+: clients can die without unsubscribing and the server won't know about it; it will continue to publish data for those IDs. A solution is for the client to resend its subscription information (with a timestamp) every so often. This works as a heartbeat for the IDs the client has subscribed to. If the client dies without unsubscribing, the server notices that some subscription IDs have not been refreshed in a while (via the timestamp) and removes those IDs (sketched below).
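For what it's worth, here is a rough pyzmq sketch of the server side of steps 2-4+ (the endpoints, the topic wire format, and the weather generator are all made up):

```python
import time
import zmq

TTL = 10.0                      # seconds before an unrefreshed topic is dropped

ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)        # step 3: data out to subscribers
pub.bind("tcp://*:5556")
sub_in = ctx.socket(zmq.PULL)    # step 2: clients push subscribe/refresh messages
sub_in.bind("tcp://*:5557")

poller = zmq.Poller()
poller.register(sub_in, zmq.POLLIN)

subscriptions = {}               # topic id -> timestamp of last refresh

def make_weather(topic: bytes) -> bytes:
    return b"weather for " + topic          # placeholder data generator

while True:
    # Step 4: never block forever waiting for the other side.
    if dict(poller.poll(timeout=100)).get(sub_in):
        topic = sub_in.recv()                # a subscribe or heartbeat refresh
        subscriptions[topic] = time.time()

    # Step 4+: drop topics whose heartbeat went stale (client died silently).
    now = time.time()
    for t in [t for t, ts in subscriptions.items() if now - ts > TTL]:
        del subscriptions[t]

    # Only create data for topics someone is still interested in.
    for topic in subscriptions:
        pub.send_multipart([topic, make_weather(topic)])
```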
This solution seems to work fine. It was a lot of low-level work though; it would be nice if ZeroMQ were a bit higher level and had some common, reliable architectures/frameworks ready to use out of the box.
I have to check the IBM MQ queue manager status before opening a queue.
I have to create a requestor app that checks whether the QMgr is active and then puts or gets messages from MQ.
Is it possible to check the status? Please share some code snippets.
Thanks
You should NEVER have to check the QMgr before opening a queue. As I responded to a similar question today, the design proposed is a very, VERY bad design. The effect is to turn async messaging back into synchronous messaging. This couples message producers to consumers, introduces location and resolution dependencies, breaks clustering, defeats WMQ's load distribution and balancing, embeds network topology into the application, and makes the whole system brittle. Please do not blame WMQ for not working correctly after intentionally defeating all its best features except the actual queue/dequeue operations.
If your requestor app is checking that the QMgr is active, you are much better off using a multi-instance connection name and a layer of two or more functionally equivalent QMgrs that can access the cluster. So long as one of the QMgrs is up, the app will cycle between them until it finds one at which to connect.
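For example, with the pymqi client the connection info can be given as a comma-separated connection name list (supported by the MQ client; worth verifying on your client version), so the app just tries each host in turn until one accepts the connection. The queue manager, channel, host and queue names below are hypothetical:

```python
import pymqi

# Multi-instance QMgr "QMA": active on one of the two hosts at any time.
queue_manager = "QMA"
channel = "APP.SVRCONN"
conn_info = "host1(1414),host2(1414)"     # connection name list: tried in order

qmgr = pymqi.connect(queue_manager, channel, conn_info)
try:
    q = pymqi.Queue(qmgr, "REQUEST.QUEUE")
    q.put(b"hello")           # just attempt the put; no status pre-check needed
    q.close()
finally:
    qmgr.disconnect()
```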
If your responder app is checking that the QMgr is active, you are much better off just attempting to connect. Responder apps should never fail over to a different QMgr since doing so breaks transactionality and may leave queues unserviced. Instead just ensure that each queue has at least two input handles from local responder apps that do not fail over across QMgrs. (It is OK if the QMgr itself fails over using hardware clustering or multi-instance QMgr though).
If the intent is to check that there's an open input handle on the queue before putting messages there, a better design is to have the requesting app not care to which queue instance the messages are routed, and instead use the instrumentation built into WMQ to either restart responder apps that lose their input handle or to disable the queue when nothing is listening.
I've been studying the vagaries of channel statuses, how they get to those states and what to do to get them stopped or started. I've got a pretty solid understanding now, but a colleague brought up the topic of channel resets.
I've done them occasionally when I couldn't explain what was going on, but now that I understand things a bit better, I'm not sure his advice to "always reset" when stopping troublesome channels is right.
Searching for information online, it's clear that a reset is needed when recreating channels. But in the case of things just breaking (a queue manager unexpectedly dropping, the network failing, and so on), is a reset a good idea in general, or should I only bother if I see sequence errors or the channel otherwise refuses to start when I know it should?
FYI, if you are resetting from the sending side of the channel, it's OK to set the sequence number to 1. The receiving side will then also go back to 1. QED :-)
If you are resetting from the receiving side of the channel, you must use the sequence number that the sender was expecting.
These numbers are in the queue manager error logs on both sides.
If the channel is in RETRY state, it will try to use the new sequence numbers when it does the next retry. This could be up to 20 minutes away if you are using the default retry attributes on the sender channel. A simple way to bump this is to STOP the channel and then START it again straight away.
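If you want to script that, something like this works (driving runmqsc from Python; the queue manager and channel names are hypothetical):

```python
import subprocess

def mqsc(qmgr: str, commands: str) -> str:
    """Pipe MQSC commands into runmqsc for the given queue manager."""
    return subprocess.run(["runmqsc", qmgr], input=commands,
                          capture_output=True, text=True).stdout

# Hypothetical names: sender channel TO.QMB defined on queue manager QMA.
# Reset to 1 on the sending side, then bounce the channel so it does not
# sit in RETRY for up to 20 minutes before picking up the new sequence number.
print(mqsc("QMA", """
RESET CHANNEL('TO.QMB') SEQNUM(1)
STOP CHANNEL('TO.QMB')
START CHANNEL('TO.QMB')
"""))
```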
HTH, G.
Channels get sequence errors for a few reasons:
The local and remote MCAs got out of sync on a batch. Usually the remote MCA committed the batch but the local one did not. If you know the remote side delivered the batch, issue a RESOLVE ACTION(COMMIT) on the channel; otherwise issue RESOLVE ACTION(BACKOUT). After resolving, issue RESET.
The channel points to a new QMgr. Perhaps after failover at the DNS, circuit or firewall NAT, a different QMgr of the same name is now attached to the channel. These should be well known because the failover (hopefully) doesn't happen without some alerts going off.
The contents of the channel sync queue are in error. Sometimes the QMgr can cause this but those issues are resolved (so far as I know) in recent versions. Sometimes people accidentally mess up the sync queue, usually by browsing it with a lock while the channels are trying to use it. This is a little harder to resolve and may require clearing the sync queue but check with IBM Support first.
When the channel is out of sync because of a known exception like failover, go ahead and reset it. Otherwise, you'd be well advised to find out why it's out of sync. You might reset it just to get it up and running, but hopefully not until you've saved off the <QMGR>/errors/AMQERR*.LOG files and any FDCs so you can diagnose the cause.
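For the first case above, a scripted version of the resolve-then-reset sequence might look like this (run on the sending side; the names are hypothetical, and choose COMMIT or BACKOUT as described above):

```python
import subprocess

def mqsc(qmgr: str, commands: str) -> str:
    """Pipe MQSC commands into runmqsc for the given queue manager."""
    return subprocess.run(["runmqsc", qmgr], input=commands,
                          capture_output=True, text=True).stdout

# Hypothetical names again: sender channel TO.QMB on queue manager QMA.
# COMMIT if the receiving side is known to have got the in-doubt batch,
# BACKOUT if it did not (the messages will then be re-sent).
action = "COMMIT"   # or "BACKOUT"
print(mqsc("QMA", f"""
RESOLVE CHANNEL('TO.QMB') ACTION({action})
RESET CHANNEL('TO.QMB') SEQNUM(1)
"""))
```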