I have a project using Spring Integration + RabbitMQ. We're still in early development, so we're rapidly changing the topology of our integration architecture, including the RabbitMQ configuration.
We're also trying to follow continuous deployment, with hands-free deployments.
I have a <rabbit:admin /> element declared in my spring config, which nicely takes care of adding new exchanges or queues.
However, I find that it fails when I'm deploying an update that changes the configuration of an existing exchange / queue.
Recently, a couple of deployments have failed because:
We switched out a direct queue for a fanout exchange.
We changed the declared TTL for messages on a direct queue.
In both instances, the change applied to existing configuration rather than just creating a new resource. The updates weren't applied, causing startup to fail.
For both, the fix was simple: delete the offending resources, restart the app, and <rabbit:admin /> kicks in and replaces them with the correct definition.
However, in a production system, we can't be doing that. It's also not currently scripted as part of our deployment, which makes continuous deployment more cumbersome.
What tools or strategies are available for a continuous deployment strategy that can handle updates to a RabbitMQ topology?
A way I've heard of doing it is to just create a new exchange and add new bindings to the existing queue. In this case you can move the publishers over to the new exchange while the consumers just keep consuming from the same queue. Once everyone has moved over, you can drop the old exchange, possibly recreating it under the previous name and moving back to it.
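As a rough sketch, that swap can also be driven from code with Spring AMQP's RabbitAdmin (the exchange and queue names below are invented for illustration; the same declarations can of course live in the XML config instead):

    import org.springframework.amqp.core.BindingBuilder;
    import org.springframework.amqp.core.FanoutExchange;
    import org.springframework.amqp.core.Queue;
    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
    import org.springframework.amqp.rabbit.core.RabbitAdmin;

    public class ExchangeMigration {
        public static void main(String[] args) {
            // Hypothetical names; assumes the queue "orders.queue" already exists.
            RabbitAdmin admin = new RabbitAdmin(new CachingConnectionFactory("localhost"));

            // Declare the replacement fanout exchange under a new name.
            FanoutExchange newExchange = new FanoutExchange("orders.fanout.v2");
            admin.declareExchange(newExchange);

            // Bind the existing queue to it, so consumers are untouched.
            Queue existingQueue = new Queue("orders.queue", true);
            admin.declareBinding(BindingBuilder.bind(existingQueue).to(newExchange));

            // Run only after all publishers have switched to "orders.fanout.v2":
            // the old direct exchange can then be deleted (and recreated under the
            // previous name if you want to move back to it).
            admin.deleteExchange("orders.direct");
        }
    }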
With queue changes this is more difficult, as you will likely get duplicate messages if you create a new queue with the new settings and bind it to the same exchanges. If this is done in concert with a new exchange (with the same config as the existing one), then you can prevent duplicate messages.
For any critical systems that can't sustain a deleted queue I'm more in favor of making a new cluster and moving all clients over to the new, correctly configured cluster. Instead of making a new cluster you could split up the existing cluster and fix one node, wipe the old one, and join it to the new node.
I've taken to managing exchange/queue configurations in Chef so that this process is a bit easier and there is no need to be careful about the order in which publishers and consumers connect to new nodes.
Those are the best I've seen/heard of. Backwards incompatible AMQP changes are similar to DB migrations in that regard. Adding compatible changes is easy to automate but incompatible changes require some care.
The upcoming 1.2 release (currently at milestone 1, 1.2.0.M1) adds an option on the RabbitAdmin, ignore-declaration-exceptions, to log configuration problems instead of failing to initialize.
It won't change the existing configuration, but it will allow the application to initialize while logging warnings.
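For reference, the equivalent when using Java configuration is the setIgnoreDeclarationExceptions flag on RabbitAdmin; a minimal sketch, with the bean and class names assumed for illustration:

    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
    import org.springframework.amqp.rabbit.core.RabbitAdmin;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class AmqpAdminConfig {

        @Bean
        public RabbitAdmin rabbitAdmin(CachingConnectionFactory connectionFactory) {
            RabbitAdmin admin = new RabbitAdmin(connectionFactory);
            // Log mismatched exchange/queue declarations instead of failing startup.
            admin.setIgnoreDeclarationExceptions(true);
            return admin;
        }
    }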
We have multiple application environments (development, QA, UAT, etc) that need to connect to fewer provider environments through MQ. For example, the provider only has one test (we'll call it TEST1) environment to which all of the client application environments need to interact. It is imperative that each client environment only receives MQ responses to the messages sent by that respective environment. This is a high volume scenario so correlating message IDs has been ruled out.
Right now TEST1 has a queue set up and is functional, but if one of the client application environments wants to use it, the others have to be shut off so that messaging doesn't overlap.
Does MQ support a model having multiple clients connect to a single queue while preserving the client-specific messaging? If so, where is that controlled (i.e. the channel, queue manager, etc)? If not, is the only solution to set up additional queues for each corresponding client?
Over the many years I have worked with IBM MQ, I have gone back and forth on this issue. I've come to the conclusion that sharing a queue just makes life more difficult. Queues should be handed out like candy on Halloween. If an application team says that they have 10 components to their application then the MQAdmin should give them 10 queues. To the queue manager or server or CPU or hard disk, there is no difference in resource usage.
Also, use an MQ naming standard that makes sense and is easy to apply security to, e.g. for the HR (Human Resources) department:
HR.PAYROLL.SALARY
HR.PAYROLL.DEDUCTIONS
HR.PAYROLL.BENEFITS
HR.EMPLOYEE.DETAILS
HR.EMPLOYEE.REVIEWS
etc...
You could use a selector, such as an MQGET where applname = "myapp", or one based on a specific user-defined property (assuming the sender populates such a property), but that is likely to perform worse than retrieval by msgid or correlid. You've not given any information to demonstrate that get-by-correlid is actually problematic, though.
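To make the idea concrete, here is a hedged JMS sketch of selecting a reply by correlation ID from a shared reply queue (the queue name and timeout are placeholders; the selector syntax is standard JMS, which WebSphere MQ supports):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class CorrelatedReceiver {

        // Receives only the reply whose correlation ID matches this client's request,
        // even though all environments share the same reply queue.
        public static Message receiveReply(ConnectionFactory cf, String correlationId)
                throws Exception {
            Connection connection = cf.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue replyQueue = session.createQueue("TEST1.REPLY.QUEUE");
                MessageConsumer consumer = session.createConsumer(
                        replyQueue, "JMSCorrelationID = '" + correlationId + "'");
                return consumer.receive(5000); // wait up to five seconds for the reply
            } finally {
                connection.close();
            }
        }
    }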
And of course any difference between a test and production environment - whether it involves code or configuration - is going to be very risky.
You would not normally share a single destination queue between multiple different application types; using multiple queues is far more standard.
I am using Mulesoft ESB with Anypoint Studio for a project. In one of my flows I am using a one-way message exchange pattern to dispatch from a VM queue (persistent file-store VM connector) to JMS, both with XA transactions enabled to avoid losing messages.
Consider a scenario where we send a message to the ESB every time a user updates his/her last name. For example, let's say the user changes the last name to 'A' but quickly changes it to 'B', so the final result is expected to be 'B'.
1) Is it likely that message 'B' gets processed before message 'A' in my case, and thus the last name ends up set to 'A' instead of 'B'?
2) How do I avoid that apart from using 'request-response' MEP?
3) Is there a way to write unit tests for making sure order of messages being processed is maintained from VM (one-way, xa enabled) to JMS (one-way, xa enabled)?
4) How do I go about testing that manually?
Thank you in advance. Any pointers/help will be appreciated.
It's not likely, since your system would normally react way quicker than a user can submit requests. However, that may be the case during a load peak.
To truly ensure message order, you need a single bottleneck (a single instance/thread) in your solution to handle all requests. That is, you need to make sure your processing strategy in Mule is synchronous and that you only have a single consumer on the VM queue. If you have an HA setup with multiple Mule servers, you risk getting messages out of order. In that case, and if the user initially connects over HTTP, you can get around most of the problem by using a load balancer with a sticky-session strategy.
A perhaps more robust and scalable solution is to make sure the user submits their local timestamp, with high resolution, on each request. Then you can discard any "obsolete" updates when storing the information into a database. However, that happens not in the Mule VM/JMS layer but rather in the database.
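A minimal sketch of that last-write-wins check at the database layer (table and column names are assumed for illustration):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    public class LastNameUpdater {

        // Applies the update only if the client-supplied timestamp is newer than the
        // stored one; returns false when the update is stale and has been discarded.
        public static boolean applyUpdate(Connection db, long userId, String lastName,
                                          Timestamp clientTimestamp) throws Exception {
            String sql = "UPDATE users SET last_name = ?, updated_at = ? "
                       + "WHERE id = ? AND (updated_at IS NULL OR updated_at < ?)";
            PreparedStatement ps = db.prepareStatement(sql);
            try {
                ps.setString(1, lastName);
                ps.setTimestamp(2, clientTimestamp);
                ps.setLong(3, userId);
                ps.setTimestamp(4, clientTimestamp);
                return ps.executeUpdate() == 1; // zero rows means a newer update already won
            } finally {
                ps.close();
            }
        }
    }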
For testability: no, I don't think there is a truly satisfying way to be 100% sure messages won't come out of order under all conditions just by writing integration tests or performing manual tests. You need to verify the message path theoretically to make sure there is no point where one message can bypass another.
Issue:
Having multiple consumer applications' activation specifications attached to a single MQ queue on distributed VM servers is causing a null payload in an MQ message.
Note: see the solution notes at the bottom. There is no issue with MQ itself.
Details:
I have 3 WebSphere applications deployed across 2 VM servers. One application is a publisher and the other two applications are consumers, all attached to a single MQ queue manager and queue.
The two consumer applications are pulling off the messages and processing them. The consumer application on the separate server receives a null payload. I have confirmed that it seems to be an issue with having multiple application server instances attached to MQ: deploying the publisher on server 2 with consumer 2 makes consumer 1 fail instead.
Question:
Has anyone tried attaching multiple MDB applications deployed on separate server instances, bound to one queue manager and one queue?
Specifications:
WebSphere 7, EJB 3.0 MDBs, transactions turned off, queue in a queue manager installed on another machine.
Goal:
Distributed computing, scaling up against large number of messages.
I'm thinking this is a configuration issue but am not 100% sure where to look. I had read that you could use an MQ Link, but I don't see why I would need to use service bus integration.
Supporting Documentation:
MQ Link
UPDATE: I fixed the problem; it was related to a combination of a class loader issue and duplicate classes. See the Solution Notes I added below.
EDIT HISTORY:
- Clarified specifications, clarified question and added overall goal.
- Added reference notes to the solution.
Has anyone tried attaching multiple MDB applications deployed on
separate server instances, bound to one local queue?
Multiple MDB applications deployed on separate servers, connecting to one queue manager (but different queues), is a normal scenario; we have it everywhere and all applications work fine.
But, I suspect what you are doing is: Multiple MDB applications deployed on separate servers, connecting to one Queue Manager and listening to the same queue.
In this case one message will be received by one consumer only.
You need to create a separate queue for each application and create a subscription for each one on the topic being published by your publisher.
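One hedged way to get that fan-out from the application side is a durable subscription per consumer application over JMS (the topic string, client ID, and subscription names below are placeholders; the subscriptions can equally be defined administratively on the queue manager):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Session;
    import javax.jms.Topic;
    import javax.jms.TopicSubscriber;

    public class PerAppSubscriber {

        // Each consumer application owns its own durable subscription, so every
        // application receives its own copy of each published message instead of
        // competing for messages on one shared queue.
        public static TopicSubscriber subscribe(ConnectionFactory cf, String appName)
                throws Exception {
            Connection connection = cf.createConnection();
            connection.setClientID(appName); // must be unique per consumer application
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("ORDERS/UPDATES");
            return session.createDurableSubscriber(topic, appName + ".SUB");
        }
    }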
Addition:
I suspect that, for load balancing, the problem you may be facing is that when your first application gets the message, it doesn't issue a commit. So there will be an uncommitted message in the queue, which may be stopping your other application from getting a message from the queue. When your first application finishes its processing it issues a commit, but it is then immediately ready to pick up a message again and hence issues another get.
In my architecture, we have implemented load balancing using multiple queue managers like below:
You create 3 queue managers, say GatewayQM, App1QM and App2QM.
Keep the three queue managers in the same cluster.
Create an alias queue (shared in the cluster) in GatewayQM and have your putting application put messages on the gateway queue.
Now create one local cluster queue in each of App1QM and App2QM. Read from these queues via your applications App1 and App2 respectively.
This implementation gives you better security and serves as a load balancer.
This specific problem was caused by a code issue in combination with class loading being set to "Parent First" in the WebSphere console. It would work on one node while the other nodes in the cluster failed; I think this was caused by the "Parent First" setting.
More importantly, in terms of my configuration: binding multiple activation specifications in a cluster to a single queue to provide distributed computing is a correct solution.
However "points" due go to "nitgeek" solution references above if you are looking for a extremely high volume solution. Its important to understand that a single MQ can have a very high depth and takes a lot to fully utilize one. My current configuration is a good starting point for quick configuration and distributed processing using Multiple MDB's.
In what cases can a queue manager lose its connectivity to the repository in a cluster environment?
I have an environment where a queue manager is often losing its connectivity to the repository, and I need to refresh the cluster to fix this and re-establish communication with the other queue managers in the cluster.
Our cluster has 100 queue managers and we have 2 repositories in it.
There are a few different issues that can cause this. One is if there are explicitly defined CLUSSDR channels pointing to a non-repository QMgr. This causes repository messages to arrive at the non-repository QMgr, which can cause its amqrrmfa repository process to die. Another is that there have been a few APARs (such as this one) which can lead to that process dying. The solutions, respectively, are to fix the configuration issues or to apply the latest Fix Pack. Another issue, less commonly seen, is that a message to a new QMgr will error out before the new QMgr can resolve to the local QMgr. In this case, the REFRESH doesn't actually cause the remote QMgr to resolve; it just provides time for the resolution to complete.
Debugging this involves isolating the possible causes. Check that amqrrmfa is running. Check that all non-repository QMgrs have one and ONLY one explicitly defined CLUSSDR channel. Verify that all repositories have one and ONLY one explicitly defined CLUSSDR to each other repository. If overlapping clusters are used, make sure NOT to overlap the channels; this means avoiding channel names like TO.QMGR and preferring names like CLUSTER.QMGR. Verify this by ensuring channels do not use the CLUSNL attribute and use the CLUSTER attribute instead. Finally, reconcile the objects in both repositories and the non-repository by issuing DIS CLUSQMGR(*) and DIS QCLUSTER(*). The repositories should have identical object inventories; if they don't, that's the problem. The non-repository should have an entry for every QMgr it has previously talked to.
One thing I have seen in the past was that an administrator had scheduled a REFRESH CLUSTER. His thinking was that this was something they needed to do to fix the cluster so why not run it on a regular basis? So he scheduled it to run daily. Then each night it made the QMgr forget about the other QMgrs in the cluster and the first time an app resolved a remote QMgr each day there was a flurry of repository traffic. This caused enough of a delay that there were a few 2087 errors each morning. Not that you would do such a thing. :-)
I'm working on updating an existing Mule configuration. The task is to enhance it to route messages to different endpoints depending on some properties of the messages, so it would be nice to have some pros and cons of the two options I have at hand:
Add properties to the message using the "message-properties-transformer" transformer; these are later used by a "filtering-router" to single out the message and put it on the correct endpoint. This option allows me to use a single queue for all destinations.
Create one queue for each destination; instead of adding a property for later routing, I just put the message on the right queue at once. I.e. this option means one queue per destination.
Any feedback would be welcome. Is there any "best practices" with regards to this?
I've had a great deal of success with your first approach, using a filtering-router. It reduces coupling between your message producers and consumers and forms a valuable abstraction, so any service can blindly drop messages into the generic "outbox".
We've come to depend on mule for filtering and routing messages so much so that we have a dedicated cluster of hardware to do only this. Using mule I was able to get far greater performance and not have to maintain connections to all queues.
The downside will be having to very carefully maintain your messaging object versions globally, and having to keep a set of transformers on hand to accept and convert from different versions if you plan to upgrade only a portion of your infrastructure.