Build durable architecture with WebSphere MQ clients

How can you create a durable architecture using MQ client and server if the clients neither persist messages nor provide assured delivery?
Just trying to figure out how you can build a scalable, durable architecture if the clients don't appear to contain any of the components required to persist data.

Middleware messaging was born of the need to persist data locally to mitigate the effects of failures of the remote node or of the network. The idea at the time was that the queue manager was installed locally on the box where the application lives and was treated as part of the transport stack. For instance, you might install both TCP and WMQ as transports, and some apps would use TCP while others used WMQ.
In the intervening 20 years, the original problems that led to the creation of MQSeries (now WebSphere MQ) have largely been solved. Networks have improved by several nines of availability, and high-availability hardware and software clustering provide options to keep the different components available 24x7.
So the practices in widespread use today to address your question follow two basic approaches: either make the components highly available so that the client can always find a messaging server, or put a QMgr where the application lives in order to provide local queuing.

The default operation of MQ is that when a message is sent (MQPUT, or in JMS terms producer.send), the application does not get a response back on the MQPUT call until the message has reached a queue on a queue manager. That is, MQPUT is a synchronous call, and a completion code of OK means that the queue manager to which the client application is connected has received the message successfully. It may not yet have reached its ultimate destination, but it has reached the protection of an MQ server, and therefore you can rely on MQ to look after the message and forward it on to where it needs to go.
Whether client connected, or locally bound to the queue manager, applications sending messages are responsible for their data until an MQPUT call returns successfully. Similarly, receiving applications are responsible for their data once they get it from a successful MQGET (or JMS consumer.receive) call.
There are multiple levels of message protection available.
If you are using non-persistent messages and asynchronous PUTs, then you are effectively saying it doesn't matter too much whether the messages reach their destination (although they generally will).
If you want MQ to really look after your messages, use synchronous PUTs as described above, persistent messages, and perform your PUTs and GETs within transactions (aka syncpoint) so you have full application control over the commit points.
If you have very unreliable networks, such that you regularly fail to get messages to a server and need client-side message protection with retries, one option to investigate is MQ Telemetry (e.g. in WebSphere MQ V7.1), which is designed for low-bandwidth and/or unreliable network communication as a route into the wider MQ network.
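As a concrete illustration of the "full protection" combination described above (synchronous put, persistent message, transaction), here is a minimal sketch using the IBM MQ classes for JMS. The host, channel, queue manager, and queue names are hypothetical placeholders, not anything prescribed in this thread:

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ProtectedPut {
    public static void main(String[] args) throws JMSException {
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory cf = ff.createConnectionFactory();
        cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "mqhost.example.com"); // hypothetical host
        cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);
        cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "APP.SVRCONN");          // hypothetical channel
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1");            // hypothetical QMgr
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

        Connection conn = cf.createConnection();
        // Transacted session: the send is not visible to consumers until commit().
        Session session = conn.createSession(true, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("APP.REQUEST"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT); // message survives a QMgr restart

        producer.send(session.createTextMessage("order-123"));
        session.commit(); // the message is hardened on the queue manager only now
        conn.close();
    }
}
```

Until session.commit() returns, the message is invisible to consumers, so the application can safely retry the whole unit of work if anything fails before the commit.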

Related

Messaging library safe for client/server crashes?

I'm evaluating some messaging libraries and protocols (e.g. ZeroMQ, WAMP). One of my main requirements is that sending messages from client to server and vice versa (two-way communication) must be absolutely safe with respect to client/server crashes. This means to me that, for example, the client must continue sending all undelivered messages after a spontaneous reboot. So the library should implement some kind of file-based buffering. Is there anything I can use out of the box?
[EDIT]
Some notes on my use case:
In my scenario there are around 1000 clients communicating with one server. There is no direct client-to-client communication required, but I need two-way communication, so the clients can push data to the server and vice versa. The clients are connected via a 3G mobile network. Both client and server are written in C#. I focused on using ZeroMQ, Apache Thrift or WAMP. But one of the main requirements is to ensure asynchronous but safe messaging with respect to system crashes: if the client starts an asynchronous data push to the server and crashes before the message can be delivered, the client must resume sending the message after a reboot.
You might look into the Apache Kafka project.
The problem is harder than it looks, and most people don't want to pay the price to make it happen.
Also, there is a UX issue with old queued-up messages replaying without the user's awareness.
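To make the "file-based buffering" requirement concrete: none of the libraries named above provide it out of the box, and the usual approach is a store-and-forward outbox, i.e. persist each message durably before the first send attempt and delete it only after an application-level acknowledgement. A minimal sketch in Java (the class names and file layout are illustrative, not from any library):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.UUID;

// Minimal store-and-forward outbox: a message survives a client crash because
// it is written to disk before the first send attempt and only removed once
// the server has acknowledged it.
public class FileOutbox {
    private final Path dir;

    public FileOutbox(Path dir) throws IOException {
        this.dir = Files.createDirectories(dir);
    }

    // Persist first, then return an id the sender uses to correlate the ack.
    public String enqueue(byte[] payload) throws IOException {
        String id = UUID.randomUUID().toString();
        Path tmp = dir.resolve(id + ".tmp");
        Files.write(tmp, payload, StandardOpenOption.CREATE_NEW, StandardOpenOption.SYNC);
        // Atomic rename so a crash never leaves a half-written entry visible.
        Files.move(tmp, dir.resolve(id + ".msg"), StandardCopyOption.ATOMIC_MOVE);
        return id;
    }

    // Called only after the server has acknowledged receipt of this message.
    public void acknowledge(String id) throws IOException {
        Files.deleteIfExists(dir.resolve(id + ".msg"));
    }

    // On restart, re-send everything still on disk.
    public void replay(Sender sender) throws IOException {
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir, "*.msg")) {
            for (Path entry : entries) {
                String id = entry.getFileName().toString().replace(".msg", "");
                sender.send(id, Files.readAllBytes(entry));
            }
        }
    }

    public interface Sender {
        void send(String id, byte[] payload);
    }
}
```

Note that the acknowledgement must come from the application level (the server confirming it has durably processed the message); a successful socket write is not enough, which is why libraries that only buffer in memory cannot satisfy the requirement.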

When to choose a remote queue design versus local queue for get/put activities

I'm trying to figure out under what conditions I would want to implement a remote queue versus a local one for 2 endpoint applications.
Consider this scenario: App A on Server A needs to send messages to App B on Server B via MQServer1.
It seems like the simplest configuration would be to create a single local queue on MQServer1 and configure AppA to put messages to the local queue while configuring AppB to get messages from the same local queue. Both AppA and AppB would connect to the same Queue Manager but execute different commands.
What sort of circumstances would require installing another MQ server (e.g. MQServer2) and configuring a remote queue on MQServer1, which instead sends the messages from AppA over a channel to a local queue on MQServer2 to be consumed by AppB?
I believe I understand the benefit of remote queuing, but I'm not sure when it's best used over a simpler design.
Here are some problems with what you call the simpler design that you don't have with remote queuing:
Time Independence - Server 1 has to be available all the time, whereas with a remote queue, once the messages have been moved to Server B, Server A and Server 1 don't need to be online when App B wants to get its messages.
Network Efficiency - With two client applications putting to or getting from a central queue, you have two inefficient network hops, instead of one efficient, batched channel connection from Server A to Server B (no need for Server 1 in the middle).
Network Problems - No network, no messages. Whereas when messages are stored locally, any that have already arrived can be processed even while the network is down. Likewise, the application putting messages is not held up by a network problem; the messages sit on the transmission queue ready to be moved, and the application can get on with the next thing.
Of course your applications should be written so that they aren't even aware of the difference, and it's just configuration changes that switch you from one design to the other.
Alternatively, we can have a separate queue manager for each application. Application A puts the message to a remote queue defined on its local queue manager, which places it on a transmission queue and sends it over a defined channel (this needs to be configured on the queue manager) to the local queue of Application B.
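To illustrate the earlier point that applications should not even be aware of the difference: the sketch below opens a queue by name and puts a message, and the code is identical whether that name is defined as a local queue or a remote queue definition pointing at another queue manager. This uses the IBM MQ classes for Java; the queue manager and queue names are hypothetical:

```java
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class PutExample {
    public static void main(String[] args) throws MQException, java.io.IOException {
        // Hypothetical QMgr name; connects in bindings mode unless a client
        // connection is configured via MQEnvironment.
        MQQueueManager qmgr = new MQQueueManager("QM1");

        // The application only names the queue; whether this resolves to a
        // local queue or a remote queue definition is decided by the
        // administrator's configuration, not by the code.
        MQQueue queue = qmgr.accessQueue("APP.B.QUEUE",
                CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQMessage msg = new MQMessage();
        msg.persistence = CMQC.MQPER_PERSISTENT;
        msg.writeString("hello from App A");

        queue.put(msg);
        queue.close();
        qmgr.disconnect();
    }
}
```

Switching from the central-queue design to the remote-queue design is then purely an administrative change; the application is redeployed unmodified.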

Intended use of Transmission Queue

This is a very basic question about IBM WebSphere MQ V7.
Regarding the transmission queue, my understanding is that it is only used with a remote queue that resides in the same queue manager. Therefore, if I want to put a message to the queue, I need to put it to the remote queue.
It is like this.
App --> Remote queue --> Transmission Queue
My question is:
Is it possible to put the message directly into transmission queue like this?
App --> Transmission Queue
--Modified on 2014.03.17 --
I found a way to put a message directly into the transmission queue. I do not know whether this is common practice, but in order to do it I needed to prepend an MQXQH header to the message. I tried it and confirmed it works. See the InfoCenter reference here.
Do not ever put directly to a transmission queue. It is dangerous if you do not know what you are doing.
You should put your message to a remote queue. A remote queue is not the same as a local queue. A remote queue is simply a pointer to a queue on another queue manager.
Although it is possible to put messages directly on the XMitQ, there is considerable risk in allowing that to occur, so most admins will prevent applications from directly accessing that queue. As you have found, it is possible to construct a message with the transmission queue header and, behind that, a normal message with the MQMD and payload. (This is, in fact, exactly how the MCA works.)
The problem here is that the QMgr does not check the values in the MQMD residing in the payload so you can put mqm as the MQMD.UserID and then address the message to the remote command queue and grant yourself admin access to that remote QMgr.
Security-conscious administrators typically use two security controls to prevent this. First, they disallow direct access to the XMitQ. That helps for outbound messages. More importantly, they set the MCAUSER of their RCVR/RQSTR/CLUSRCVR channels to a non-admin user ID that is not authorized to put messages onto any sensitive queues.
The other issue is, of course, that what you describe completely defeats WMQ's name resolution. By embedding routing into the app, you prevent the administrator from adjusting channel weights, cluster settings, failover and load distribution at the network level. Need to redistribute traffic? Redeploy the code. Not a good plan.
So for security reasons and because you paid a lot of money to get WMQ's reliability - much of which comes from dynamic addressing and name resolution features - coding apps to write directly to the XMitQ is strongly discouraged.
You should not be using the transmission queue directly. It's used by the message channel agent (MCA) as temporary storage when sending messages across to a remote queue manager.
This is distributed queuing - i.e. you put a message to Queue Manager A and want it routed to a local queue on Queue Manager B. So you define a reference on QM-A referring to the local queue on QM-B. This reference is the 'remote queue definition'.
The remote queue definition specifies the transmission queue name. The transmission queue is bound to the MCA, which in turn knows about the remote QM.
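If an application genuinely needs to address a queue on another queue manager without a remote queue definition, the supported route is to name the target queue manager when opening the queue and let name resolution find the transmission queue, rather than writing to the XMitQ and building an MQXQH yourself. A hedged sketch with the IBM MQ classes for Java; all names are hypothetical:

```java
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class RemotePut {
    public static void main(String[] args) throws MQException, java.io.IOException {
        MQQueueManager qmgr = new MQQueueManager("QM_A"); // local QMgr (hypothetical)

        // Open APP.B.LOCAL as it exists on QM_B. QM_A's name resolution finds
        // the transmission queue and channel for QM_B; the application never
        // touches the XMitQ or constructs an MQXQH itself.
        MQQueue queue = qmgr.accessQueue(
                "APP.B.LOCAL",                 // queue name on the remote QMgr
                CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING,
                "QM_B",                        // remote queue manager name
                null,                          // no dynamic queue
                null);                         // no alternate user id

        MQMessage msg = new MQMessage();
        msg.writeString("routed by name resolution");
        queue.put(msg);

        queue.close();
        qmgr.disconnect();
    }
}
```

Because routing stays in the queue manager's configuration, the administrator keeps control of channel weights, failover and load distribution, which is exactly the benefit the answer above describes.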

Difference between queue manager and message broker

What is the difference between a WebSphere Message Broker and a queue manager? I guess the queue manager puts messages in the queue, takes messages out of the queue, moves messages to backout queues, etc. So what is the job of the broker?
Does it sit between the publisher and the Queue Manager or between the consumer and the Queue Manager?
WebSphere MQ is asynchronous messaging software. You can achieve asynchronous messaging between your applications via WebSphere MQ, which makes your infrastructure loosely coupled (applications can keep working even though other applications in the infrastructure are down).
But the applications in your infrastructure may not be able to understand each other's message formats, so just sending the message to the target application may not be enough. You may require transformation of the message.
You can do it by writing your own program using the WebSphere MQ API.
Your program should be able to do the following things:
Pick a message from a specific queue (using MQGET).
Understand the message. Say it's an XML message: your program must be able to parse the XML and read the data in it.
After reading the input message, build your output message based on the requirements.
Either publish the output message or put it on some specific queue (say TargetQ) so that the target application can get it. The target application will then get the message either by issuing MQGET on TargetQ or by subscribing to the topic published by your application.
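A minimal sketch of such a hand-rolled program, assuming the IBM MQ classes for Java and the JDK's XML parser; the queue names and the XML field are hypothetical placeholders:

```java
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

public class HandRolledTransformer {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM1");            // hypothetical QMgr
        MQQueue in  = qmgr.accessQueue("SOURCE.Q", CMQC.MQOO_INPUT_AS_Q_DEF);
        MQQueue out = qmgr.accessQueue("TargetQ",  CMQC.MQOO_OUTPUT);

        // 1. Pick a message from a specific queue (MQGET), waiting up to 5s.
        MQMessage request = new MQMessage();
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_WAIT;
        gmo.waitInterval = 5000;
        in.get(request, gmo);

        // 2. Understand the message: parse the XML payload.
        byte[] body = new byte[request.getMessageLength()];
        request.readFully(body);
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new ByteArrayInputStream(body));
        String orderId = doc.getDocumentElement().getAttribute("orderId"); // hypothetical field

        // 3./4. Build the output message and put it where the target app reads.
        MQMessage reply = new MQMessage();
        reply.writeString("<order id=\"" + orderId + "\" status=\"validated\"/>");
        out.put(reply);

        in.close();
        out.close();
        qmgr.disconnect();
    }
}
```

Even this toy version needs message parsing, error handling and queue housekeeping, which is exactly the development effort the next paragraph refers to.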
But writing your own program will take a lot of development time and effort and also may be a bit complex.
So, IBM provided its own software to do the job: WebSphere Message Broker.
WMB allows you to create such programs very easily and a lot faster.
Appropriate nodes in WMB will do all the above steps for you. In fact, it provides many more features beyond those steps.
WebSphere MQ still doesn't have an HTTP listener, but Message Broker does. It allows you to host web services and build HTTP-based flows, and in a secure way (it supports SSL).
MQ provides you the infrastructure for messaging: queues and topics (IBM MQ).
IBM Integration Bus (formerly known as WebSphere Message Broker) allows you to apply common EAI patterns, e.g. routing and transformation.
I want to add just two points: Message Broker (now IIB) includes a set of fast, optimized parsers (XML, CSV, etc.) and useful mapping nodes (message-to-message, message-to-database). MQ is also used for the internal configuration messages coming from the Configuration Manager.
WebSphere MQ is a solution for application-to-application communication services regardless of where your applications or data reside. Whether on a single server, separate servers of the same type, or separate servers of different architecture types, WebSphere MQ facilitates communications between applications by sending and receiving message data via messaging queues. Applications then use the information in these messages to interact with Web browsers, business logic, and databases. WebSphere MQ provides a secure and reliable transport layer for moving data unchanged in the form of messages between applications but it is not aware of the content of the messages. WebSphere MQ uses a set of small and standard application programming interfaces (APIs) that support a number of programming languages, including Visual Basic, NATURAL, COBOL, Java, and C across all platforms.
WebSphere Message Broker is built to extend WebSphere MQ, and it is capable of understanding the content of each message that it moves through the Broker. Customers can define the set of operations on each message depending on its content. The message processing nodes supplied with WebSphere Message Broker are capable of processing messages from various sources, such as Java Message Service (JMS) providers, HyperText Transfer Protocol (HTTP) calls, or data read from files. By connecting these nodes with each other, customers can define linked operations on a message as it flows from one application to its destination.
Message Broker can do the following:
Matches and routes communications between services
Converts between different transport protocols
Transforms message formats between requestor and service
Identifies and distributes business events from disparate sources
Together, WebSphere MQ and WebSphere Message Broker deliver a comprehensive publish and subscribe facility, connecting Message Broker's broad transport and format support to WebSphere MQ's messaging backbone. WebSphere Message Broker extends the WebSphere MQ publish and subscribe functionality with advanced functions such as content-based publish and subscribe by means of an enhanced Publication node. The two products share a common publish and subscribe domain for topic- and content-based operations.
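On the MQ side, basic topic-based publish/subscribe (which MQ supports natively since V7) looks roughly like this with the IBM MQ classes for JMS. The queue manager name and topic string are hypothetical, and connection details are omitted for brevity:

```java
import javax.jms.*;
import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class PubSubSketch {
    public static void main(String[] args) throws JMSException {
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory cf = ff.createConnectionFactory();
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1"); // hypothetical, bindings mode

        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("topic://Price/Fruit");    // hypothetical topic string

        // The non-durable subscriber must exist before the publish,
        // or the publication is simply missed.
        MessageConsumer subscriber = session.createConsumer(topic);
        conn.start();

        MessageProducer publisher = session.createProducer(topic);
        publisher.send(session.createTextMessage("apples: 1.23"));

        TextMessage received = (TextMessage) subscriber.receive(5000);
        System.out.println("Got: " + (received == null ? "nothing" : received.getText()));
        conn.close();
    }
}
```

Content-based filtering and routing beyond simple topic strings is where Message Broker's enhanced Publication node comes in, as described above.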
MQ is mainly for transporting messages from one system to another.
WMB (WebSphere Message Broker) sits between queue managers, transforming messages and changing the content or format of the message as per the system requirements and business logic.

WebSphere MQ clustering

I'm pretty new to WebSphere MQ, so please pardon me if I am not using the right terms. We are doing a project in which we need to set up an MQ cluster for high availability.
The client application maintains a pool of connections with the queue manager for subscribers and publishers. Suppose we have two queue managers in a cluster hosting queues with the same names. Each of the queues has its own set of subscribers and publishers, which are cached by the client application. Suppose one of the queue managers goes down; the subscribers and publishers of the queues on that queue manager will die, making the corresponding objects in the client application defunct.
In this case, can the following scenarios be taken care of?
1] When the first queue manager crashes, are the messages on its queues transferred to the other queue manager in the cluster?
2] When the queue manager comes up again, is there any mechanism to restore the publishers and subscribers? Currently we have written an automated recovery thread in the client application which tries to reconnect the failed publishers and subscribers. But in a cluster setup, we fear that the publishers and subscribers will reconnect to the other running queue manager, and when the crashed queue manager is restored, there will be no publishers or subscribers on it.
Can anybody please explain how to take care of the above two scenarios?
WMQ clustering is an advanced topic. You must first do a good amount of reading up on WMQ and understand what clustering in the WMQ world means before attempting anything.
A WMQ cluster differs in many ways from a traditional cluster. In a traditional active/passive cluster, data is shared between the active and passive instances of an application: at any point in time the active instance processes the data, and when it goes down the passive instance takes over. This is not the case in a WMQ cluster, where the queue managers are unique, and hence the queues/topics hosted by those queue managers are not shared. You might have queues/topics with the same names on both queue managers, but since the queue managers are different, the messages, topics, subscriptions, etc. are not shared.
Answering your questions:
1) No. Messages, if persistent, will remain on the crashed queue manager. They will not be transferred to the other queue manager. Since the queue manager itself is not available, nothing can be done until it is brought back up.
2) No. The queue manager can't do that. It is the application's duty to check for queue manager availability and reconnect. WMQ provides an automatic client reconnection feature, where the WMQ client libraries automatically reconnect to a queue manager when they detect connection-broken errors. This feature is available from WMQ v7.x and above with the C and Java clients; the C# client supports it from v7.1.
For your high availability requirement, you could look at using the multi-instance queue manager feature of WMQ. This feature enables active and passive instances of the same queue manager to run on two different machines. The active instance handles client connections while the passive instance stands by; both instances share data and logs. Once the active instance goes down, the passive instance becomes active, and you will have access to all the persistent messages that were in the queues before the failover.
Read through the WMQ InfoCenter for more on multi-instance queue managers.
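The automatic client reconnection mentioned above is mostly configuration on the connection factory. A hedged sketch with the IBM MQ classes for JMS, assuming a multi-instance queue manager whose two hosts (hypothetical names) are supplied as a connection name list:

```java
import javax.jms.Connection;
import javax.jms.JMSException;

import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ReconnectingClient {
    public static void main(String[] args) throws JMSException {
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory cf = ff.createConnectionFactory();

        // Active and standby hosts of the multi-instance QMgr (hypothetical).
        cf.setStringProperty(WMQConstants.WMQ_CONNECTION_NAME_LIST,
                "mqhost1.example.com(1414),mqhost2.example.com(1414)");
        cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "APP.SVRCONN");   // hypothetical channel
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1");     // same QMgr on both hosts
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

        // Ask the client libraries to reconnect transparently on failover,
        // giving up after 30 minutes.
        cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_OPTIONS,
                WMQConstants.WMQ_CLIENT_RECONNECT);
        cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_TIMEOUT, 1800);

        Connection conn = cf.createConnection();
        conn.start();
        // ... create sessions, producers and consumers as usual ...
        conn.close();
    }
}
```

With this in place, the recovery thread described in the question becomes largely unnecessary for connection-level failures, although the application still needs to handle in-flight work that was rolled back during the failover.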
To add to Shashi's answer, to get the most out of WMQ clustering you need to have a network hop between senders and receivers of messages. WMQ clustering is about how QMgrs talk among themselves. It has nothing to do with how client apps talk to QMgrs and does not replicate messages. In a cluster when a message has to get from one QMgr to another, the cluster figures out where to route it. If there are multiple clustered instances of a single destination queue, the message is eligible to be routed to any of them. If there is no network hop between senders and receivers, then messages don't need to leave the local QMgr and therefore WMQ clustering behavior is never invoked, even though the QMgrs involved may participate in the cluster.
In a conventional WMQ cluster architecture, the receivers all listen on multiple instances of the same queue, with the same name, spread across multiple QMgrs. The senders have one or more QMgrs where they can connect and send requests (fire-and-forget), possibly awaiting replies (request-reply). Since the receivers of the messages provide some service, I call their QMgrs "Service Provider QMgrs." The QMgrs where the senders of messages live are "Service Consumer" QMgrs because these apps are consumers of services.
The points below are from a presentation I use on WMQ architecture consulting engagements.
Note that consumers of services - the things sending request messages - fail over. Things listening on service endpoint queues and providing services do NOT fail over. This is because of the need to make sure every active service endpoint queue is always served. Typically each app instance holds an input handle on two or more queue instances. This way a QMgr can go down and all app instances remain active. If an app instance goes down, some other app instance continues to serve its queues. This affinity of service providers to specific QMgrs also enables XA transactionality if needed.
The best way I've found to explain WMQ HA is a slide from the IMPACT conference:
A WebSphere MQ cluster ensures that a service remains available, even though an instance of a clustered queue may be unavailable. New messages in the cluster will route to the remaining queue instances. A hardware cluster or multi-instance QMgr (MIQM) provides access to existing messages. When one side of the active/passive pair goes down, there is a brief outage on that QMgr only while the failover occurs, then the secondary node takes over and makes any messages on the queues available again. A network that combines both WMQ clusters and hardware clusters/MIQM provides the highest level of availability.
Keep in mind that in none of these configurations are messages replicated across nodes. A WMQ message always has a single physical location. For more on this aspect, please see Thoughts on Disaster Recovery.
