Resuming persistent sessions while switching to a different mosquitto broker

Can anyone tell me how I can resume a persistent session on a different broker when switching brokers for load balancing in mosquitto? I am really confused and can't find a way out.

Short answer: you don't.
All the persistent session information is held in the broker, and mosquitto has no way to share that information between instances.
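Persistence in mosquitto is strictly local: each broker writes session state to its own store, and nothing replicates it elsewhere. A minimal mosquitto.conf sketch showing the relevant options (the path is illustrative):

```conf
# Session and subscription state is saved to a local file only;
# no option here can point it at, or share it with, another broker.
persistence true
persistence_location /var/lib/mosquitto/
autosave_interval 300
```

If sessions must survive a broker switch, you would need a broker that supports clustering with shared session state, or have clients re-subscribe after reconnecting.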

Related

Can you check to see if an IBM MQ topic is up and available through a Java application before attempting to create a connection?

I would like to add some conditional logic to our Java application code for attempting to create a JMS Topic Connection. I have seen problems in the past stemming from attempting to create a connection when the MQ server had been restarted or was currently down. One improvement I added was to check for the quiescent state, and another was to increase the timer before attempting reconnection to our durable topic queue.
Is there a way to confirm with the MQ server/topic/channel that it is up and running and a connection request can safely be made?
The best way to confirm that a queue manager (and the channel you are using to connect to the queue manager) is up and running is to attempt to connect to it.
If your connection attempt fails, you will get an MQ reason code telling you exactly why. This is a much better check than any administrative command, because it also confirms that your application, and its security context, are correctly configured to connect to the queue manager. It is entirely possible to have an up-and-running queue manager but an application that is not yet correctly configured to use it. So connect from the application, and if it works, the queue manager is up and running.
Your point about increasing the timer before attempting to reconnect after a failure is well made. It doesn't help anyone to hammer the queue manager with repeated, closely spaced connection attempts before it is ready to accept them. Still, anything that is going to test the availability of the queue manager ultimately needs to connect to it, so, very simply, just connect.
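The connect-and-retry approach can be sketched as follows. Here `connect_fn` is a hypothetical stand-in for your real connection call (e.g. creating the JMS TopicConnection), not an MQ API; the backoff keeps the retries from hammering the queue manager:

```python
import time

def connect_with_backoff(connect_fn, max_attempts=5, base_delay=1.0, cap=30.0):
    """Try connect_fn() until it succeeds, sleeping with exponential
    backoff between failures. connect_fn should raise on failure,
    as a real MQ connect does (surfacing the reason code)."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return connect_fn()          # success: the queue manager is up
        except Exception:
            if attempt == max_attempts:
                raise                    # give up; let the caller see why
            time.sleep(delay)
            delay = min(delay * 2, cap)  # back off: 1s, 2s, 4s, ...
```

With this shape the application itself is the availability probe: a successful return proves the queue manager, the channel, and your security context are all working at once.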

how to terminate inactive websocket connection in passenger

For the past few days we have been struggling with inactive websocket connections. The problem may lie at the network level. I would like to ask if there is any switch or configuration option to set a timeout for websocket connections in Phusion Passenger in standalone mode.
You should probably solve this at the application level, because solving it in other layers will be uglier (those layers know less about websockets).
With Passenger Standalone you could try to set max_requests. This should cause application processes to be restarted semi-regularly, and when shutting down a process Passenger should normally abort websocket connections.
If you want more control over the restart period you could also use for example a cron job that executes rolling restarts every so often, which shouldn't be noticeable to users either.
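As a hedged sketch of the max_requests route: in Passenger Standalone this can go in Passengerfile.json (the value 1000 is an arbitrary example, not a recommendation):

```json
{
  "max_requests": 1000
}
```

For the cron-based rolling restarts, invoking `passenger-config restart-app` against the application's path on a schedule achieves the same recycling without waiting for the request counter to fill.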
Websockets in Ruby and Passenger (and maybe Node.js as well) aren't "native" to the server. Instead, your application "hijacks" the socket and controls all the details (timeouts, parsing, etc.).
This means that a solution must be implemented in the application layer (or whatever framework you're using), since Passenger doesn't keep any information about the socket any more.
I know this isn't the answer you wanted, but it's the fact of the matter.
Some servers use native websockets, where the server controls the websocket connections (timeouts, parsing, etc.; e.g. the Ruby MRI iodine server), but mostly websockets are "hijacked" from the server and the application takes full control and ownership of the connections.
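A minimal application-level sketch of such a timeout, in Python for brevity (the actual close call belongs to whatever websocket library does the hijacking):

```python
import time

class IdleTimeout:
    """Track last-activity time per connection and report which ones
    have been silent longer than the limit and should be closed."""

    def __init__(self, limit_seconds):
        self.limit = limit_seconds
        self.last_seen = {}  # connection id -> last activity timestamp

    def touch(self, conn_id, now=None):
        # Call this on every frame received (including pongs).
        self.last_seen[conn_id] = time.monotonic() if now is None else now

    def expired(self, now=None):
        now = time.monotonic() if now is None else now
        return [c for c, t in self.last_seen.items() if now - t > self.limit]
```

A periodic task would call `expired()`, send each stale connection one last ping, and close (and forget) the ones that never answer.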

more than one listener for the queue manager

Can there be more than one listener for a queue manager? I have used one listener/queue manager combination so far and wonder if this is possible. I ask because we have two applications connecting to the same queue manager and we seem to have a problem with that.
There are a couple of meanings for the term listener in an MQ context. Let's see if we can clear up some confusion over the terminology and then answer the question as it relates to each.
As defined in the spec, a JMS listener is an object that implements a callback mechanism. It listens on destinations for messages and calls onMessage when they arrive. The destinations may be queues or topics hosted by any JMS-compliant transport provider.
In IBM MQ terms, a listener is a process (runmqlsr) that handles inbound connection requests on a server. Although these can handle a variety of protocols, in practice they are almost exclusively TCP listeners that bind a port (1414 by default) and negotiate connection requests on sockets.
TCP Ports
Tim's answer applies to the second of these contexts. MQ can listen for connections on multiple ports, and indeed it is quite common to do so. Each listener listens on one and only one port. It may listen on that port across all network interfaces or can be bound to a specific network interface. No two listeners can bind to the same combination of interface and port, though.
In a B2B context the best practice is to run a dedicated listener for each external business partner to isolate each of their connections across dedicated access paths. Internally I usually recommend separate ports for QMgr-to-QMgr, app-to-QMgr and interactive user connections.
In that sense it is possible to run multiple listeners on a given QMgr. Each of those listeners can accept many connections. Their job is to negotiate the connection then hand the socket off to a Message Channel Agent which talks to the QMgr on behalf of the remotely connected client or QMgr.
JMS Listeners
Based on comments, Ulab refers to JMS listeners. These objects establish a connection to a queue manager and then wait in GET mode for new messages arriving on a destination. On arrival of a message, they call the onMessage method which is an asynchronous callback routine.
As to the question "can there be more than one (JMS) listener to a queue manager?" the answer is definitely yes. A multi-threaded application can have multiple listeners connected, multiple application instances can connect at the same time, and many thousands of application connections can be handled by a single queue manager with sufficient memory, disk and CPU available.
Of course, each of these applications is ultimately connected to one or more queues so then the question becomes one of whether they can connect to the same queue.
Many listeners can listen on the same queue so long as they do not get exclusive access to it. Each will receive a portion of the messages arriving.
Listeners on QMgr-managed subscriptions are exclusively attached to a dynamic queue, but multiple instances on the same topic will all receive the same messages.
If the queue is clustered and there is more than one instance of it, multiple listeners will be required to get all the messages, since MQ workload distribution will normally spread them across those instances.
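The non-exclusive case can be illustrated with a plain in-memory queue: each get removes a message, so competing listeners each see only a portion of the traffic. This is a simulation of the behavior described above, not MQ's actual dispatch algorithm:

```python
from queue import Queue

q = Queue()
for i in range(4):
    q.put(f"msg-{i}")

# Two "listeners" attached non-exclusively to the same queue,
# taking turns; each receives a share of the arrivals.
listener_a, listener_b = [], []
while not q.empty():
    listener_a.append(q.get())
    if not q.empty():
        listener_b.append(q.get())

print(listener_a)  # ['msg-0', 'msg-2']
print(listener_b)  # ['msg-1', 'msg-3']
```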
Yes, you can create as many listeners as you wish:
http://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.explorer.doc/e_listener.htm
However, there is no reason why two applications can't connect to the queue manager via the same listener (on the same port). What problem have you run into?

NServiceBus DBMS connection timeout

I use NServiceBus with Oracle Advanced Queuing (OAQ) instead of MSMQ.
I have a problem working with a DBMS server that is shut down every day at the same time.
In particular, when my NServiceBus host can't get a DBMS connection, it starts logging errors.
When the DBMS is restarted, my host sometimes recovers and sometimes doesn't, seemingly at random! However, after restarting my host manually, everything is OK.
Another detail is that when my NServiceBus host can't recover, it logs a 'connection timeout' message every 15 seconds!
What is the behavior of NServiceBus when it's reading from a queue and the DBMS crashes? What could I do to solve this problem?
thank you,
R
I'm afraid the problem you're facing is the result of the design of your system. By having the queues in the DB, when the DB becomes unavailable, so do the queues. NServiceBus assumes that it is always able to communicate with its queues, as is the case when using a distributed/federated queuing system like MSMQ.
You can look at what some people in the community have done to combat this same problem when they were using IBM MQ (http://code.google.com/p/nservicebuswmq/) - ultimately falling back to MSMQ under those conditions and then syncing back up with MQ when it came back online.

ActiveMQ network of brokers connectivity scheme

I need to scale up my ActiveMQ solution, so I have defined a network of brokers.
I'm trying to figure out how to connect my producers and consumers to the cluster.
Does each producer have to be connected to a single broker (with the failover URI for availability)? In that case, how can I guarantee the distribution of traffic across the brokers? Do I need to configure each producer to connect to a different broker?
Should I apply the same scheme to the consumers?
This makes the application aware of the cluster topology, which I hope can be avoided by a decent cluster.
Tx
Tomer
I strongly suggest you carefully read through the documentation from activemq.apache.org on clustering ActiveMQ. There are a lot of very helpful tips.
From what you have written, I suggest you pay special attention to this; at the bottom of the page it details how you can control the failover/failback configuration for your producers from the server side.
For example:
updateClusterClients - if true pass information to connected clients about changes in the topology of the broker cluster
rebalanceClusterClients - if true, connected clients will be asked to rebalance across a cluster of brokers when a new broker joins the network of brokers
updateURIsURL - A URL (or path to a local file) to a text file containing a comma separated list of URIs to use for reconnect in the case of failure
In a production system, I would think that making use of updateURIsURL would make scaling out a lot less painful.
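Those three attributes live on the broker's transport connector in activemq.xml; a sketch with placeholder host, port, and file path:

```xml
<!-- activemq.xml: broker-side control of client failover behavior -->
<transportConnectors>
  <transportConnector name="openwire"
      uri="tcp://0.0.0.0:61616"
      updateClusterClients="true"
      rebalanceClusterClients="true"
      updateURIsURL="file:/opt/activemq/conf/reconnect-uris.txt"/>
</transportConnectors>
```

Clients then only need a failover URI pointing at any one broker, e.g. `failover:(tcp://broker1:61616)`, and the broker pushes topology updates to them as the network of brokers changes.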
