ActiveMQ Artemis: most of the addresses and queues disappeared after HA failover

Ubuntu 18.04
Artemis 2.14
I've been experimenting with HA. Usually I can shut down the primary, and the secondary comes alive, with all the addresses and queues. But today I shut the primary down and the secondary came to life but with only a few of the addresses and queues. Some addresses appeared with no queues, some with only one, but most were totally missing.
I started the primary broker again, and HA switched back, but still without all the objects. They're all set up the same, for the most part.
I created the objects (addresses and queues) through the console, then used the data tools to export them from the journal and import them into this instance -- which I'm preparing to run as the production instance.
What would cause the objects to disappear? Should I instead define them in the broker.xml file?
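For context, defining them statically would look roughly like this in broker.xml (the address and queue names here are made up; queues declared this way are durable by default and are recreated by the broker on every start):

```xml
<!-- hypothetical excerpt from broker.xml (Artemis 2.x schema) -->
<addresses>
   <address name="orders">
      <anycast>
         <queue name="orders"/>
      </anycast>
   </address>
   <address name="events">
      <multicast>
         <queue name="events.audit"/>
      </multicast>
   </address>
</addresses>
```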

Related

ActiveMQ offline message transfer on database level

We are running an in-house EAI system that uses ActiveMQ as the message broker with JDBC persistence.
We have a cold-standby failover setup in which each node has its own database schema (for several reasons).
Now, if the primary goes down and we want to start up the backup, we would like to transfer all undelivered messages from one node to the other at the database level.
Having a look at the table "ACTIVEMQ_MSGS" made us unsure whether we can do this without any drawbacks or side effects:
There is a column "ID" without any DB-sequence behind - can the backup broker handle this?
The column "MSGID_PROD" contains the host name of the primary server - is there a problem if the message should be processed by a broker with a different name?
There is a column "MSGID_SEQ" (which seems to be "1" all the time) - what does this mean? Can we keep it?
Thanks and kind regards,
Michael
I would raise a big red flag about this idea. Yes, in theory you could well succeed with this, but you are not supposed to touch the JDBC data piecemeal.
ActiveMQ has a few different patterns for master/slave HA setups: either a shared store for both the master and the slave, or a replicated store (LevelDB + ZooKeeper).
Even a shared JDBC store could be replicated, but at the database level.
OK, so you want a different setup than the official ones; fine. There is a way, but not with raw SQL commands.
By "Primary goes down", I assume you somehow assumes the primary database is still alive to copy data from. Fine. Then have a spare installation of ActiveMQ ready (on a laptop, on the secondary server or anywhere safe). You can configure that instance to connect to the "primary database" and ship all messages over to the secondary node using "network of brokers". From the "spare" broker, configure a network connection to the secondary broker and make sure you specify the "staticBrige" option to true. That will make the "spare" broker hand over all unread messages to the secondary broker. Once the spare broker is done, it can be shut down and the secondary should have all messages. This way, you can reuse the logic in whatever ActiveMQ version you have and need not to worry about ID sequences and so forth.

What are the implications of using NFS3 file system for multi-instance queue managers in WebSphere MQ

We are stuck in a difficult scenario in our new MQ infrastructure implementation, which uses multi-instance queue managers with WebSphere MQ v7.5 on Linux.
The concern is that our network team is not able to configure NFS4, so we are still on NFS3. We understand that multi-instance queue managers will not function properly on NFS3. But are there any issues if we define queue managers in multi-instance fashion on NFS3 and expect them to work correctly in single-instance mode?
Thanks
I would not expect you to have issues running single-node queue managers with NFS3; we do so on a regular basis. The NFS4 requirement is for the file-locking mechanism that multi-instance queue managers use to determine when the primary instance has lost control and a secondary queue manager should take over.
If you do define the queue manager as multi-instance and the queue manager attempts to fail over, it may not do so successfully; at worst it may corrupt your queue manager files.
If you control the failover yourself - as in, shut down the queue manager on one node and start it again on another node - that should work for you, as there is no file sharing taking place and all files would be closed on the primary node before being opened on the secondary node. You would have to make sure the secondary queue manager is NEVER running in standby mode.
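For example, a controlled move would look roughly like this (the queue manager name is hypothetical):

```
# on node A: end the queue manager and wait for it to complete
endmqm -w QM1

# on node B: start it normally - note there is no -x flag,
# so a standby instance is never permitted
strmqm QM1
```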
I hope this helps.
Dave

Multiple WebSphere Application Servers attached to a single WebSphere MQ queue failing

Issue:
Having multiple consumer applications' activation specifications attached to a single queue on distributed VM servers is causing a null payload in an MQ message.
Note: see the solution notes at the bottom. There is no issue with MQ itself.
Details:
I have 3 WebSphere applications deployed across 2 VM servers. 1 application is a publisher, and the other 2 applications are consumers attached to a single queue manager and queue.
The 2 consumer applications pull messages off and process them, but the consumer application on the separate server receives a null payload. It seems to be an issue with having multiple application server instances attached to MQ; I confirmed this by deploying the publisher on server 2 with consumer 2, at which point consumer 1 fails.
Question:
Has anyone tried attaching multiple MDB applications deployed on separate server instances, bound to one queue manager and one queue?
Specifications:
WebSphere 7, EJB 3.0 MDBs, transactions turned off, queue in a queue manager installed on another machine.
Goal:
Distributed computing, scaling up against a large number of messages.
I'm thinking this is a configuration issue, but I'm not 100% sure where to look. I had read that you could use an MQ link, but I don't see why I would need to use service integration bus.
Supporting Documentation:
MQ Link
UPDATE: I fixed the problem; it was related to a class loader issue combined with duplicate classes. See the solution notes I added below.
EDIT HISTORY:
- Clarified specifications, clarified question and added overall goal.
- Added reference notes to the solution.
Has anyone tried attaching multiple MDB applications deployed on separate server instances bound to one local MQ?
Multiple MDB applications deployed on separate servers, connecting to one queue manager (but different queues), is a normal scenario; we have it everywhere and all applications work fine.
But, I suspect what you are doing is: Multiple MDB applications deployed on separate servers, connecting to one Queue Manager and listening to the same queue.
In this case one message will be received by one consumer only.
You need to create a separate queue for each application and create a subscription for each one to the topic being published by your publisher.
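In MQSC, that setup would look roughly like this (queue, topic string, and subscription names are invented):

```
DEFINE QLOCAL(APP1.IN)
DEFINE QLOCAL(APP2.IN)
* one administrative subscription per consumer, both on the same topic
DEFINE SUB(APP1.SUB) TOPICSTR('orders/events') DEST(APP1.IN)
DEFINE SUB(APP2.SUB) TOPICSTR('orders/events') DEST(APP2.IN)
```

Each published message is then delivered to both queues, and each MDB drains its own queue.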
Addition:
I suspect that, for load balancing, the problem you may be facing is this: when your first application gets the message, it doesn't issue a commit immediately. There is then an uncommitted message in the queue, which may stop your other application from getting a message from the queue. When your first application finishes its processing, it issues a commit, but it is then immediately ready to pick up another message, and hence it issues another get.
In my architecture, we have implemented load balancing using multiple queue managers, as described below:
You create 3 queue managers, say GatewayQM, App1QM and App2QM.
Keep the three queue managers in the same cluster.
Create an alias queue (shared in the cluster) in GatewayQM and have your putting application put messages on the gateway queue.
Now create one local cluster queue in each of App1QM and App2QM. Read from these queues via your applications App1 and App2 respectively.
This implementation gives you better security and serves as an effective load balancer.
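A rough MQSC sketch of that layout (cluster, queue manager, and queue names are invented):

```
* on App1QM and again on App2QM: one clustered local queue each
DEFINE QLOCAL(ORDERS) CLUSTER(MYCLUSTER)

* on GatewayQM: a clustered alias that the putting application targets;
* DEFBIND(NOTFIXED) lets each message be workload-balanced across
* the two ORDERS instances
DEFINE QALIAS(ORDERS.GW) TARGET(ORDERS) CLUSTER(MYCLUSTER) DEFBIND(NOTFIXED)
```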
This specific problem was caused by a code issue combined with class loading being set to "Parent First" in the WebSphere console. It would work on one node while the other nodes in the cluster failed; I think that was caused by the "Parent First" setting.
More importantly, in terms of my configuration: binding multiple activation specifications in a cluster to a single queue to provide distributed computing is a correct solution.
However, "points" do go to nitgeek's solution referenced above if you are looking for an extremely high-volume solution. It's important to understand that a single queue can have a very high depth, and it takes a lot to fully utilize one. My current configuration is a good starting point for quick setup and distributed processing using multiple MDBs.

Lost messages when migrating RabbitMQ from one EC2 instance to another

I have RabbitMQ installed and working well on an EC2 CentOS 6 instance, with an assortment of queues and topics. I decided to migrate this working instance to another, new EC2 server instance with the same OS and initial setup, just smaller.
I created an AMI (Amazon Machine Image) from the existing installation, and then used this AMI to create a new server instance. RabbitMQ came up just fine, as did all the topics, users, virtual hosts, queues, etc.
However, the queues all came back with 0 messages in them, although messages did exist in the queues before creating the server image.
Questions:
Did I miss something in my migration?
Where are messages explicitly 'stored' while they're in Rabbit queues?
I believe the messages were sent as 'persistent', but I'm not 100% sure about that. I am aware of replication of RabbitMQ instances, but figured this method of server recreation would be simpler/quicker.
@robthewolf's comments got me searching some more, but with a slightly different slant (around whether one could explicitly save off queue messages in a backing database/key-value store).
That led me to this old, but seemingly still-relevant blog post that clearly describes rabbit's current 'persistence' methods for all cases (persistent publishing, durable, etc.)
http://www.rabbitmq.com/blog/2011/01/20/rabbitmq-backing-stores-databases-and-disks/
If the messages were persistent, you can check this SO question: RabbitMQ uses Mnesia storage, which is tied to the host name/IP address of the machine it is running on, so a few tweaks from the answers there can resolve the issue.
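For reference, "persistent" in RabbitMQ means both a durable queue and delivery mode 2 on each message; a minimal sketch with the RabbitMQ Java client (host and queue name are placeholders):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class PersistentPublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder host
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            // durable=true: the queue definition survives a broker restart
            ch.queueDeclare("orders", true, false, false, null);
            // PERSISTENT_TEXT_PLAIN sets deliveryMode=2, so the message
            // body itself is written to disk
            ch.basicPublish("", "orders",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "hello".getBytes("UTF-8"));
        }
    }
}
```

Only messages published this way to a durable queue would have survived a clean broker restart; anything else lives in memory only.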

Websphere MQ and High Availability

When I read about HA in WebSphere MQ, I always come to the point where the best practice is to create two queue managers handling the same queue and use the out-of-the-box load balancing, so that when one is down, the other takes over its job.
Well, this is great, but what about the messages in the queue that belong to the queue manager that went down? I mean, do these messages reside there (when the messages are persistent, of course) until the QM is up and running again?
Furthermore, is it possible to create a common storage for these doubled queue managers? Then no message would wait for the QM to come back up, and every message would be delivered in the proper order. Is this correct?
WebSphere MQ provides different capabilities for HA, depending on your requirements. WebSphere MQ clustering uses parallelism to distribute load across multiple instances of a queue. This provides availability of the service but not for in-flight messages.
Hardware clustering and Multi-Instance Queue Manager (MIQM) are both designs using multiple instances of a queue manager that see a single disk image of that queue manager's state. These provide availability of in-flight messages but the service is briefly unavailable while the cluster fails over.
Using these in combination it is possible to provide recovery of in-flight messages as well as availability of the service across multiple queue instances.
In the hardware cluster model, the disk is mounted on only one server, and the cluster software monitors for failure and swaps the disk, IP address, and possibly other resources to the secondary node. This requires a hardware cluster monitor such as PowerHA to manage the cluster.
The Multi-Instance QMgr is implemented entirely within WebSphere MQ and needs no other software. It works by having two running instances of the QMgr pointing to the same NFS 4 shared disk mount. Both instances compete for locks on the files. The first one to acquire a lock becomes the active QMgr. Because there is no hardware cluster monitor to perform IP address takeover this type of cluster will have multiple IP addresses. Any modern version of WMQ allows for this using multi-instance CONNAME where you can supply a comma-separated list of IP or DNS names. Client applications that previously used Client Channel Definition Tables (CCDT) to manage failover across multiple QMgrs will continue to work and CCDT continues to be supported in current versions of WMQ.
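For example, a client channel definition using a multi-instance CONNAME might look like this (channel, host, and queue manager names are invented):

```
* comma-separated CONNAME: the client tries hostA first, then hostB
DEFINE CHANNEL(APP.SVRCONN) CHLTYPE(CLNTCONN) +
       CONNAME('hostA(1414),hostB(1414)') QMNAME(QM1)
```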
Please see the Infocenter topic Using WebSphere MQ with high availability configurations for details of hardware cluster and MIQM support.
Client Channel Definition Table files are discussed in the Infocenter topic Client Channel Definition Table file.
