Is there a workaround for JMS / WebSphere MQ message segmentation? - jms

So far I haven't found a solution for reading segmented messages with IBM's JMS implementation (without message grouping). See also: is IBM MQ Message Segmentation possible using JMS?
Is there any workaround yet for a JMS client to receive segmented messages?
For example, is it possible to configure an "MQ server component" to reassemble segmented messages into one single message for the JMS client? Other ideas?

If the total reassembled message stays within 100MB (i.e. the maximum allowed message size), then you could have an interim queue with a non-JMS MQ API application getting and reassembling the messages, and then putting the large reassembled message onto a queue that the JMS application gets from. This retains the smaller message sizes while they traverse the MQ network; the messages only become large (read: inefficient) at the last point before the application retrieves them.
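As a rough sketch of what that interim reassembly application could look like, here is a minimal version using the non-JMS IBM MQ classes for Java in bindings mode. The queue manager and queue names are placeholders; the key piece is MQGMO_COMPLETE_MSG, which makes MQGET return only fully reassembled logical messages:

import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class SegmentReassembler {
    public static void main(String[] args) throws Exception {
        // Placeholder names -- substitute your own objects
        MQQueueManager qmgr = new MQQueueManager("QMGR1");
        MQQueue in = qmgr.accessQueue("INTERIM.SEGMENTED.QUEUE", CMQC.MQOO_INPUT_AS_Q_DEF);
        MQQueue out = qmgr.accessQueue("JMS.APP.QUEUE", CMQC.MQOO_OUTPUT);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        // MQGMO_COMPLETE_MSG: wait for all segments and return the
        // reassembled logical message in a single MQGET
        gmo.options = CMQC.MQGMO_COMPLETE_MSG | CMQC.MQGMO_WAIT | CMQC.MQGMO_SYNCPOINT;
        gmo.waitInterval = CMQC.MQWI_UNLIMITED;

        MQPutMessageOptions pmo = new MQPutMessageOptions();
        pmo.options = CMQC.MQPMO_SYNCPOINT;

        while (true) {
            MQMessage msg = new MQMessage();
            in.get(msg, gmo);   // blocks until a complete logical message is available
            out.put(msg, pmo);
            qmgr.commit();      // get + put committed as one unit of work
        }
    }
}

Getting and putting under the same unit of work means a crash mid-shuttle leaves the segments on the interim queue rather than losing the message.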
However, if the total reassembled message is larger than 100MB, which may well be the case if segmentation is in use, then the above solution will not help.
In fact, if the total reassembled message is larger than 100MB then you can't send it over a client connection anyway, in which case you'll need to make the application local to the queue manager.
If you are local to a queue manager, then an API exit that changes the underlying MQGET call made by the JMS layer may also be a possibility. This only works with a local queue manager because client-side API exits are supported only in the C client. The SVRCONN channel can be crossed regardless of the type of client at the other end of the socket, but you cannot send a message greater than 100MB over the client channel, so if the total reassembled message is greater than the channel's MAXMSGL then it can't be sent.
Related Reading
Writing API Exits
API Exit Reference

Best protocol to send huge file over unstable connection

Use case: Need to send huge files (multiples of 100 MB) over a queue from one server to another.
At present, I am using an ActiveMQ Artemis server to send a huge file as a BytesMessage, fed from an input stream, over the TCP protocol, with the retry interval set to -1 for unlimited failover retries. The main problem is that the consumer endpoint's connection is mostly unstable, i.e. it disconnects from the network because of its mobile nature.
So while sending a file to the queue, if the connection is dropped and then reconnected, the broker should resume from where the transfer was interrupted. For example, while transferring a 300 MB file to the consumer, assume 100 MB has been transferred when the connection drops; after reconnecting, the process should resume by transferring the remaining 200 MB, not the whole 300 MB again.
My question is: which protocol (TCP, STOMP, or OpenWire) and which practice (BlobMessage, or BytesMessage with an input stream) is best to achieve this in ActiveMQ Artemis?
Apache ActiveMQ Artemis supports "large" messages which will stream over the network (using Netty TCP). This is covered in the documentation. Note: this functionality is only for "core" clients; it doesn't work with STOMP or OpenWire. It also doesn't support "resume" functionality where the message transfer would pick up where it left off after a disconnection.
My recommendation would be to send the file in smaller chunks as individual messages, which will be easier to deal with in the case of network slowness or disconnection. The messages can be grouped together with a correlation ID or something similar, and then the end client can take the pieces of the message and assemble them together once they are all received, as in the sketch below.
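Here is a minimal sketch of the sending side. The connection factory and queue are assumed to be configured elsewhere (e.g. via JNDI), and the chunkIndex / lastChunk property names are invented for the example:

import java.io.FileInputStream;
import java.util.UUID;
import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class ChunkedFileSender {
    static final int CHUNK_SIZE = 1024 * 1024; // 1 MB per message

    public static void send(ConnectionFactory factory, Queue queue, String path) throws Exception {
        try (Connection conn = factory.createConnection();
             FileInputStream file = new FileInputStream(path)) {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            String transferId = UUID.randomUUID().toString(); // ties the chunks together
            byte[] buf = new byte[CHUNK_SIZE];
            int read;
            int index = 0;
            while ((read = file.read(buf)) != -1) {
                BytesMessage chunk = session.createBytesMessage();
                chunk.writeBytes(buf, 0, read);
                chunk.setJMSCorrelationID(transferId);
                // invented property names: the consumer uses them to
                // reorder chunks and detect the end of the transfer
                chunk.setIntProperty("chunkIndex", index++);
                chunk.setBooleanProperty("lastChunk", file.available() == 0);
                producer.send(chunk);
            }
        }
    }
}

The consumer buffers chunks by JMSCorrelationID, reorders them by chunkIndex, and writes the file out once the lastChunk message arrives; a dropped connection then costs at most one chunk's worth of retransmission rather than the whole file.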

MQ Input/Output count increasing when Datapower client is connect using MQ front side handler

I am using MQ 7.5.0.2 and a DataPower IDG7 client.
When MQ sends messages to DataPower, DataPower receives those messages using MQ front-side handlers, and likewise it sends messages using the backend URL.
But the problem I am facing is that whenever DataPower connects to MQ, the queue's input/output count increases to 10~20 and remains the same, and the handle state is INACTIVE.
When I display the queue details using the command below, it shows:
display qstatus(******) type(handle)
QUEUE(********) TYPE(HANDLE)
APPLDESC(WebSphere MQ Channel)
APPLTAG(WebSphere Datapower MQClient)
APPLTYPE(SYSTEM) BROWSE(NO)
CHANNEL(*****) CONNAME(******)
ASTATE(NONE) HSTATE(INACTIVE)
INPUT(SHARED) INQUIRE(NO)
OUTPUT(NO) PID(25391)
QMURID(0.1149) SET(NO)
TID(54)
URID(XA_FORMATID[] XA_GTRID[] XA_BQUAL[])
URTYPE(QMGR)
Can anyone help me with this? It only clears whenever I restart the queue manager, but I don't want to restart the qmgr every time.
HSTATE(INACTIVE) indicates: "No API call from a connection is currently in progress for this object. For a queue, this condition can arise when no MQGET WAIT call is in progress." This is likely to happen if the application (DataPower in this case) opened the queue and is then not issuing any API calls on the opened object. PID 25391: is this for an amqrmppa process? Is DataPower expected to consume messages on this queue continuously?

How does the DEADQ work for a SVRCONN channel in MQ?

I've a question about the DEADQ in MQ. I know that the DEADQ is used when a message cannot be delivered to the target queue, for example when the queue is full or put-inhibited. But if a client application connects to the QMGR through a SVRCONN channel and the target queue is full right now, will the message sent by the client application go to the DEADQ, or will the put operation just return a failure saying the queue is full?
If it works as the latter, does that mean the DEADQ is not used in client-server mode, i.e. through a SVRCONN channel?
Thanks
The DLQ is used by QMgr-to-QMgr channels because at that point the message has been entrusted to WMQ to deliver and/or persist as necessary. There is no option to directly notify the sending application in the API call that something went wrong. The best the QMgr can do is send back a report message, if the app has requested it and if the app specified a reply-to queue.
When an application is connected over a client channel, the QMgr can tell the application during the PUT API call "hey, the destination queue is full," and let the application decide the appropriate course of action, as in the sketch below. Usually that correct action does NOT include putting the message onto the DLQ. In fact, routing this type of message to a secondary queue is VERY unusual. Conventional wisdom says it's better for the app to raise an alert than to provision overflow queues.
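For example, with the IBM MQ classes for Java the client sees the condition as an MQException with reason code MQRC_Q_FULL on the put, and decides for itself what to do. A hedged sketch (the alert action is just a placeholder):

import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.constants.CMQC;

public class QueueFullHandler {
    // The queue is assumed to be already open for output; the point is
    // only that the sending app gets the error and makes the decision.
    public static void putWithAlert(MQQueue queue, MQMessage msg) throws MQException {
        try {
            queue.put(msg);
        } catch (MQException e) {
            if (e.reasonCode == CMQC.MQRC_Q_FULL) {
                // the app's decision point: raise an alert, back off and
                // retry, etc. -- NOT "move the message to the DLQ"
                System.err.println("Destination queue full; raising alert");
            } else {
                throw e;
            }
        }
    }
}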
The one exception to this is when using the JMS classes. These will move a poison message (one where the backout count exceeds the BOTHRESH value on the queue) to a backout queue. If the backout queue is not specified, or the app is not authorized to it, or it is full, the app will then try to use the DLQ. Again, allowing an app to put messages directly to the DLQ is very unusual and not considered good practice, but it can be done.
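For illustration, the backout threshold and backout queue are attributes set on the input queue with MQSC (queue names here are examples only):

ALTER QLOCAL(APP.INPUT.QUEUE) BOTHRESH(3) BOQNAME(APP.INPUT.BACKOUT)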

Build durable architecture with Websphere MQ clients

How can you create a durable architecture environment using MQ client and server if the clients don't allow you to persist messages, nor do they allow for assured delivery?
Just trying to figure out how you can build a scalable / durable architecture if the clients don't appear to contain any of the necessary components required to persist data.
Thanks,
S
Middleware messaging was born of the need to persist data locally to mitigate the effects of failures of the remote node or of the network. The idea at the time was that the queue manager was installed locally on the box where the application lives and was treated as part of the transport stack. For instance, you might install both TCP and WMQ as transports, and some apps would use TCP while others used WMQ.
In the intervening 20 years, the original problems that led to the creation of MQSeries (Now WebSphere MQ) have largely been solved. The networks have improved by several nines of availability and high availability hardware and software clustering have provided options to keep the different components available 24x7.
So the practices in widespread use today to address your question follow two basic approaches. Either make the components highly available so that the client can always find a messaging server, or put a QMgr where the application lives in order to provide local queueing.
The default operation of MQ is that when a message is sent (MQPUT, or in JMS terms producer.send), the application does not get a response back on the MQPUT call until the message has reached a queue on a queue manager; i.e. MQPUT is a synchronous call, and a completion code of OK means that the queue manager to which the client application is connected has received the message successfully. It may not yet have reached its ultimate destination, but it has reached the protection of an MQ server, and therefore you can rely on MQ to look after the message and forward it on to where it needs to get to.
Whether client connected, or locally bound to the queue manager, applications sending messages are responsible for their data until an MQPUT call returns successfully. Similarly, receiving applications are responsible for their data once they get it from a successful MQGET (or JMS consumer.receive) call.
There are multiple levels of message protection available.
If you are using non-persistent messages and asynchronous PUTs, then you are effectively saying it doesn't matter too much whether the messages reach their destination (although they generally will).
If you want MQ to really look after your messages, use synchronous PUTs as described above, use persistent messages, and perform your PUTs and GETs within transactions (aka syncpoint) so you have full application control over the commit points; a sketch of this pattern follows below.
If you have very unreliable networks, such that you expect to regularly fail to get the messages to a server and to need regular retries, so that you need client-side message protection, one option you could investigate is MQ Telemetry (e.g. in WebSphere MQ V7.1), which is designed for low-bandwidth and/or unreliable network communications, as a route into the wider MQ network.
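To make the middle option concrete, here is a minimal JMS sketch of the pattern described above: persistent messages, a synchronous send, and a transacted session so the application controls the commit point. The connection and queue are assumed to be created elsewhere (e.g. from an MQConnectionFactory):

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class ProtectedSend {
    public static void send(Connection connection, Queue queue, String payload) throws Exception {
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        try {
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            TextMessage msg = session.createTextMessage(payload);
            producer.send(msg); // synchronous: returns once the QMgr holds the message
            session.commit();   // unit of work ends; message becomes visible
        } finally {
            session.close();
        }
    }
}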

Where is a message under syncpoint control stored prior to COMMIT?

With WebSphere MQ, where is a message addressed to a remote queue manager and PUT under syncpoint control stored until MQCMIT is issued?
Messages that are addressed to a remote queue manager resolve to a transmit queue. Which transmit queue they resolve to depends on how the message will be sent to the remote QMgr. The message will resolve locally to either a user-defined transmit queue for a SDR or SVR channel, or it will resolve to SYSTEM.CLUSTER.TRANSMIT.QUEUE for a clustered channel.
For any message that is put under syncpoint, the message is written to the transaction logs and, if the message is persistent, to the queue file. The queue depth increases to reflect that the message is in the queue but the message is not made available to other programs (such as the channel agent) until the COMMIT occurs.
So if your message is going to a clustered queue manager and you PUT under syncpoint, you will see the depth of the cluster transmit queue increase. At that point the message is at least in the transaction log and possibly also written to the queue file. As soon as the message is committed, it becomes available to the channel.
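As a small illustration using the IBM MQ classes for Java (object names are placeholders), a put under syncpoint looks like the sketch below; the transmit queue depth rises at the put, but the channel cannot see the message until the commit:

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class SyncpointPut {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QMGR1");               // placeholder
        MQQueue queue = qmgr.accessQueue("REMOTE.DEST.QUEUE", CMQC.MQOO_OUTPUT);

        MQMessage msg = new MQMessage();
        msg.writeString("payload");

        MQPutMessageOptions pmo = new MQPutMessageOptions();
        pmo.options = CMQC.MQPMO_SYNCPOINT; // put inside a unit of work

        queue.put(msg, pmo); // depth increases, message invisible to the channel
        qmgr.commit();       // MQCMIT: message now available for delivery
    }
}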
