I have RabbitMQ installed and working well on an EC2 CentOS 6 instance, with an assortment of queues and topics. I decided to migrate this working instance to another, new EC2 server instance with the same OS and initial setup, just smaller.
I created an AMI (Amazon server image) from the existing installation, and then used this AMI to create a new server instance. RabbitMQ came up just fine, as did all the topics, users, virtual hosts, queues, etc.
However, the queues all came back with 0 messages in them, although messages did exist in the queues before creating the server image.
Questions:
Did I miss something in my migration?
Where are messages explicitly 'stored' while they're sitting in Rabbit queues?
I believe the messages were sent as 'persistent', but I'm not 100% sure about that. I am aware of RabbitMQ replication, but figured this method of server recreation would be simpler/quicker.
#robthewolf's comments got me searching some more, but with a slightly different slant (around whether one could explicitly save off queue messages to a backing database/key-value store).
That led me to this old but seemingly still-relevant blog post, which clearly describes Rabbit's current 'persistence' methods for all cases (persistent publishing, durable queues, etc.):
http://www.rabbitmq.com/blog/2011/01/20/rabbitmq-backing-stores-databases-and-disks/
If the messages were persistent, check this SO question - RabbitMQ uses Mnesia storage, which is tied to the hostname/IP address of the machine it is running on, so a few tweaks from the answers there can resolve the issue.
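For reference, here is a minimal sketch of what "persistent" means on the publishing side with the RabbitMQ Java client: the queue must be declared durable and the message published with delivery mode 2. The queue name and host are placeholders; if your producers weren't doing something equivalent, the messages would only ever have lived in memory and would not survive the snapshot/restart.

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class PersistentPublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");   // placeholder broker host

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // durable = true: the queue *definition* survives a broker restart
            channel.queueDeclare("work.queue", true, false, false, null);

            // PERSISTENT_TEXT_PLAIN sets deliveryMode = 2, so the message body is
            // written to disk; non-persistent messages are lost on restart
            channel.basicPublish("", "work.queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "hello".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```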
We are going to work on a Spring Boot application which will be deployed on two ECS containers to support a clustered environment. This application will accept requests and drop messages into SQS. Another flow in the application should pick messages from the queue and process them. As the same application will be running on two different servers in the cluster, I am not sure which server will pick up a given message. How can I make sure that only one server picks up each message from the queue? It could be either server.
Ordinary SQS queues do not even guarantee that a message appears only once on the queue - see the AWS standard SQS queue docs.
Using a reasonable value for the visibility timeout (the time during which a message can't be seen by other consumers) relative to the time it takes to consume a message should solve it.
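As an illustration only (the queue URL and timeout values below are made up), this is roughly how that looks with the AWS SDK for Java v2: the received message stays invisible to the other container for the visibility timeout, and deleting it after successful processing is what prevents it being redelivered.

```java
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

public class SqsWorker {
    public static void main(String[] args) {
        // hypothetical queue URL
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue";

        try (SqsClient sqs = SqsClient.create()) {
            ReceiveMessageRequest receive = ReceiveMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .maxNumberOfMessages(1)
                    .waitTimeSeconds(10)       // long polling
                    .visibilityTimeout(120)    // hide the message from the other container while we work
                    .build();

            for (Message m : sqs.receiveMessage(receive).messages()) {
                process(m.body());             // must finish well inside the visibility timeout

                // deleting the message is what stops the other server from ever seeing it
                sqs.deleteMessage(DeleteMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .receiptHandle(m.receiptHandle())
                        .build());
            }
        }
    }

    private static void process(String body) {
        System.out.println("processing: " + body);
    }
}
```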
Alternatively you can use an SQS FIFO queue but it’s much slower and can, in my experience, get stuck on a corrupt message.
We have multiple application environments (development, QA, UAT, etc) that need to connect to fewer provider environments through MQ. For example, the provider only has one test (we'll call it TEST1) environment to which all of the client application environments need to interact. It is imperative that each client environment only receives MQ responses to the messages sent by that respective environment. This is a high volume scenario so correlating message IDs has been ruled out.
Right now TEST1 has a queue set up and is functional, but if one of the client application environments wants to use it, the others have to be shut off so that messaging doesn't overlap.
Does MQ support a model having multiple clients connect to a single queue while preserving the client-specific messaging? If so, where is that controlled (i.e. the channel, queue manager, etc)? If not, is the only solution to set up additional queues for each corresponding client?
Over the many years I have worked with IBM MQ, I have gone back and forth on this issue. I've come to the conclusion that sharing a queue just makes life more difficult. Queues should be handed out like candy on Halloween. If an application team says that they have 10 components to their application then the MQAdmin should give them 10 queues. To the queue manager or server or CPU or hard disk, there is no difference in resource usage.
Also, use an MQ naming standard that makes sense and is easy to apply security to, e.g. for the HR (Human Resources) department:
HR.PAYROLL.SALARY
HR.PAYROLL.DEDUCTIONS
HR.PAYROLL.BENEFITS
HR.EMPLOYEE.DETAILS
HR.EMPLOYEE.REVIEWS
etc...
You could use a selector, such as an MQGET where applname = "myapp", or one based on a specific user-defined property (assuming the sender populates such a property), but that's likely to perform worse than any retrieval by msgid or correlid. You've not given any information, though, to demonstrate that get-by-correlid is actually problematic.
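To make the property-based option concrete, here is a rough JMS sketch under the assumption that the provider can stamp each reply with a user-defined property; the property name clientEnv and the JNDI names are invented for the example, and as noted above, selectors on a busy shared queue tend to cost more than get-by-correlid.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class EnvScopedConsumer {

    // Provider side: stamp every reply with the client environment it belongs to.
    static void sendReply(Session session, Queue replyQueue, String env, String payload) throws JMSException {
        MessageProducer producer = session.createProducer(replyQueue);
        TextMessage msg = session.createTextMessage(payload);
        msg.setStringProperty("clientEnv", env);   // hypothetical user-defined property
        producer.send(msg);
    }

    public static void main(String[] args) throws Exception {
        // JNDI names are placeholders; use whatever your MQ admin has defined.
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/Test1CF");
        Queue replyQueue = (Queue) ctx.lookup("jms/TEST1.REPLY");

        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        conn.start();

        // Each client environment only sees replies carrying its own tag.
        MessageConsumer qaConsumer = session.createConsumer(replyQueue, "clientEnv = 'QA'");
        TextMessage m = (TextMessage) qaConsumer.receive(5000);
        System.out.println(m == null ? "no QA message" : m.getText());

        conn.close();
    }
}
```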
And of course any difference between a test and production environment - whether it involves code or configuration - is going to be very risky.
You would not normally share a single destination queue between multiple different application types - multiple queues is far more standard.
Ubuntu 18.04
Artemis 2.14
I've been experimenting with HA. Usually I can shut down the primary, and the secondary comes alive, with all the addresses and queues. But today I shut the primary down and the secondary came to life but with only a few of the addresses and queues. Some addresses appeared with no queues, some with only one, but most were totally missing.
I started the primary broker again, and HA switched back, but still without all the objects. They're all set up the same, for the most part.
I created the objects (addresses and queues) through the console, then used the data tools to export them from the journal and import them into this instance -- which I'm preparing to run as the production instance.
What would cause the objects to disappear? Should I instead define them in the broker.xml file?
We are running an in-house EAI system using ActiveMQ as the message broker, with JDBC persistence.
There we have a cold-standby failover solution, each node having its own database schema (for several reasons).
Now, if the primary goes down and we want to start up the backup, we would like to transfer all undelivered messages at the database level from one node to the other.
Having a look at the table "ACTIVEMQ_MSGS" made us unsure if we can do this without any drawbacks or side effects:
There is a column "ID" without any DB-sequence behind - can the backup broker handle this?
The column "MSGID_PROD" contains the host name of the primary server - is there a problem if the message should be processed by a broker with a different name?
There is a column "MSGID_SEQ" (which seems to be "1" all the time) - what does this mean? Can we keep it?
Thanks and kind regards,
Michael
I would raise a big red flag about this idea. Well, yes, in theory you could well succeed with this, but you are not supposed to touch the JDBC data piece by piece.
ActiveMQ has a few different patterns for master/slave HA setups: either use a shared store for both the master and the slave, or use a replicated store (LevelDB + ZooKeeper).
Even a shared JDBC store could be replicated, but on the database level.
OK, so you want a setup other than the official ones; fine. There is a way, but it doesn't involve raw SQL commands.
By "Primary goes down", I assume you somehow assumes the primary database is still alive to copy data from. Fine. Then have a spare installation of ActiveMQ ready (on a laptop, on the secondary server or anywhere safe). You can configure that instance to connect to the "primary database" and ship all messages over to the secondary node using "network of brokers". From the "spare" broker, configure a network connection to the secondary broker and make sure you specify the "staticBrige" option to true. That will make the "spare" broker hand over all unread messages to the secondary broker. Once the spare broker is done, it can be shut down and the secondary should have all messages. This way, you can reuse the logic in whatever ActiveMQ version you have and need not to worry about ID sequences and so forth.
Issue:
Having multiple consumer applications' activation specifications attached to a single queue, across distributed VM servers, is causing a null payload in an MQ message.
Note: see the solution notes at the bottom; this turned out not to be an issue with MQ itself.
Details:
I have 3 WebSphere applications deployed across 2 VM servers. One application is a publisher and the other two are consumers attached to a single queue manager and queue.
The two consumer applications are pulling off the messages and processing them. The consumer application on the separate server receives a null payload. I have confirmed that it seems to be an issue with having multiple application server instances attached to the queue: if I deploy the publisher on server 2 with consumer 2, then consumer 1 fails instead.
Question:
Has anyone tried attaching multiple MDB applications, deployed on separate server instances, bound to one queue manager and one queue?
Specifications:
WebSphere 7, EJB 3.0 MDBs, transactions turned off, queue on a queue manager installed on another machine.
Goal:
Distributed computing, scaling up to handle a large number of messages.
I'm thinking this is a configuration issue, but I'm not 100% sure where to look. I had read that you could use MQ Link, but I don't see why I would need to use the service integration bus.
Supporting Documentation:
[MQ Link][1]
UPDATE: I fixed the problem; it was related to a combination of a class loader issue and duplicate classes. See the solution notes I added below.
EDIT HISTORY:
- Clarified specifications, clarified the question, and added the overall goal.
- Added reference notes to the solution.
Has anyone tried attaching multiple MDB applications deployed on separate server instances, bound to one local queue?
Multiple MDB applications deployed on separate servers, connecting to one queue manager (but different queues), is a normal scenario; we have it everywhere and all applications work fine.
But I suspect what you are doing is: multiple MDB applications deployed on separate servers, connecting to one queue manager and listening on the same queue.
In this case one message will be received by one consumer only.
You need to create a separate queue for each application and create a subscription for each one to the topic being published by your publisher, as sketched below.
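As a rough JMS illustration of that pattern (the topic, subscription, and JNDI names are invented): the publisher publishes once to a topic, and each consuming application holds its own durable subscription, so each receives its own copy instead of competing for one queue. With MDBs in WebSphere the subscription is normally configured on the activation specification rather than in code; this just shows the messaging pattern.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class FanOutExample {
    public static void main(String[] args) throws Exception {
        // JNDI names are hypothetical placeholders.
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/CF");
        Topic topic = (Topic) ctx.lookup("jms/ORDERS.TOPIC");

        Connection conn = cf.createConnection();
        conn.setClientID("consumer-app-1");                 // required for durable subscriptions
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        conn.start();

        // Each consuming application gets its *own* durable subscription, so each
        // receives its own copy of every publication.
        MessageConsumer sub = session.createDurableSubscriber(topic, "app1-sub");

        // The publisher just publishes to the topic once.
        MessageProducer pub = session.createProducer(topic);
        pub.send(session.createTextMessage("order #42"));

        TextMessage copy = (TextMessage) sub.receive(5000);
        System.out.println("app1 received: " + (copy == null ? "nothing" : copy.getText()));

        conn.close();
    }
}
```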
Addition:
I suspect that, for load balancing, the problem you may be facing is that when your first application gets the message, it doesn't issue a commit. So there will be an uncommitted message in the queue, which may be stopping your other application from getting a message from the queue. When your first application finishes its processing, it issues a commit, but by then it is ready to pick up the next message and hence it issues another get.
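If the consuming side were a hand-rolled JMS client rather than a container-managed MDB (where the container issues the commit), the fix for that pattern is simply to keep the unit of work short: receive under a transacted session and commit as soon as the message is processed. A minimal sketch, with the connection and queue assumed to come from elsewhere:

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class TransactedConsumer {

    // Receive one message inside a local transaction and commit immediately after
    // processing, so the uncommitted get is not held open while other work happens.
    static void consumeOne(Connection conn, Queue queue) throws Exception {
        Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
        try {
            MessageConsumer consumer = session.createConsumer(queue);
            Message m = consumer.receive(5000);
            if (m != null) {
                // ... process the message here ...
                session.commit();   // makes the get permanent; rollback() would put the message back
            }
        } finally {
            session.close();
        }
    }
}
```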
In my architecture, we have implemented load balancing using multiple queue managers like below:
You create 3 queue managers, say GatewayQM, App1QM and App2QM.
Keep the three queue managers in the same cluster.
Create an alias queue (shared in the cluster) in GatewayQM and have your putting app put messages on the gateway queue.
Now create one local cluster queue in each of App1QM and App2QM. Read from these queues via your applications App1 and App2 respectively.
This implementation provides better security and serves as an effective load balancer.
This specific problem was caused by a code issue combined with class loading being set to "Parent First" in the WebSphere console. It would work on one node while the other nodes in the cluster would fail; I think this was caused by the "Parent First" setting.
More importantly, in terms of my configuration, binding multiple activation specifications in a cluster to a single queue to provide distributed computing is a correct solution.
However "points" due go to "nitgeek" solution references above if you are looking for a extremely high volume solution. Its important to understand that a single MQ can have a very high depth and takes a lot to fully utilize one. My current configuration is a good starting point for quick configuration and distributed processing using Multiple MDB's.