What is Command level in MQ Series?

In WebSphere MQ, the command level for a queue manager is 701. What does it actually specify?

WebSphere products use a "[version].[release].[modification].[Fix Pack]" naming convention. For example, 7.0.1.6 is the current release, specified down to the Fix Pack level.
Fix packs are limited to bug fixes and very limited non-disruptive functionality enhancements.
Modifications can include functionality enhancements but no API changes. For example the Multi-Instance Queue Manager was introduced in 7.0.1.
Releases may introduce significant new function and limited API changes, but are highly forward and backward compatible within the same version.
Versions encapsulate a core set of functionality. Changes at this level may sacrifice some backward compatibility in exchange for significant new functionality. For example, WMQ Pub/Sub was moved from Message Broker to base MQ in the V7 release.
Since administrative functionality does not change in Fix Packs but may change at the Modification level, compatibility with administrative tools is based on the queue manager's Command Level.
There is an old but still useful TechNote which described this when the numbering conventions were adopted for WMQ.

It displays the major version number of WMQ, e.g. 530, 600, 700, 701. Despite being 'only' a .0.1 increment, WMQ 7.0.1 gets a new major version number due to a number of internal changes (e.g. multi-instance QMs), although WMQ 6.0.1.x and 6.0.2.x were both CMDLEVEL 600.

Command level, although similar to the V.R.M.F., is not exactly the same thing. The command level is used to allow configuration applications to know which commands (and attributes within those commands) will be understood by the command server.
The first thing any configuration application should do is discover the PLATFORM and CMDLEVEL of the queue manager. Then the application can determine which commands and attributes are acceptable to send to that queue manager.
It is possible that CMDLEVEL could be increased in the service stream, in which case the V.R.M.F. would not necessarily match the CMDLEVEL. This would happen if some new external attributes were added in the service stream: queue managers without that patch would not understand them, but queue managers with the patch would. How does an application determine what to send? The CMDLEVEL determines that, and so it would have to be incremented by the patch.
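To make the discovery step described above concrete, here is a minimal sketch of inquiring PLATFORM and CMDLEVEL with the Java PCF classes (the host, port and channel are hypothetical placeholders, not values from the question):

import com.ibm.mq.constants.CMQC;
import com.ibm.mq.constants.CMQCFC;
import com.ibm.mq.headers.pcf.PCFMessage;
import com.ibm.mq.headers.pcf.PCFMessageAgent;

public class InquireCmdLevel {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; substitute your own host, port and channel.
        PCFMessageAgent agent = new PCFMessageAgent("localhost", 1414, "SYSTEM.DEF.SVRCONN");
        try {
            // MQCMD_INQUIRE_Q_MGR with no parameters returns the queue manager attributes.
            PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q_MGR);
            PCFMessage[] responses = agent.send(request);
            int cmdLevel = responses[0].getIntParameterValue(CMQC.MQIA_COMMAND_LEVEL);
            int platform = responses[0].getIntParameterValue(CMQC.MQIA_PLATFORM);
            System.out.println("CMDLEVEL=" + cmdLevel + ", PLATFORM=" + platform);
        } finally {
            agent.disconnect();
        }
    }
}

A configuration tool would branch on those two values before deciding which commands and attributes to send.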

Related

Automatic reconnect in case of network failures

I am testing the .NET version of ZeroMQ to understand how to handle network failures. I put the server (pub socket) on one external machine and am debugging the client (sub socket). If I stop my local Wi-Fi connection for a few seconds, ZeroMQ automatically recovers and I even get the remaining values. However, if I disable Wi-Fi for a longer time, like a minute, then it just gets stuck waiting for a frame. How can I configure the period during which ZeroMQ is still able to recover? And how can I reconnect manually after, say, several minutes? How can I tell that the socket is stuck and I need to kill/open it again?
Q :" How can I configure this ... ?"
A :Use the .NET versions of zmq_setsockopt() detailed parameter settings - family of link-management parameters alike ZMQ_RECONNECT_IVL, ZMQ_RCVTIMEO and the likes.
All other questions depend on your code.
If you use the blocking forms of the .recv() methods, you can easily throw yourself into unsalvageable deadlocks; it is best never to block in your own code (why would one ever deliberately give up one's own code's domain-of-control?).
If you need to understand the low-level internal link-management details, do not hesitate to use the zmq_socket_monitor() instrumentation (if it is not available in the .NET binding, you can still use another language to see the details the monitor instance reports about link state and related events).
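As an illustration of the link-management options mentioned above, here is a minimal sketch using the Java binding (JeroMQ) rather than the .NET one; the endpoint is a hypothetical placeholder, and the .NET binding exposes the same socket options under similar names:

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class SubWithLinkManagement {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket sub = ctx.createSocket(SocketType.SUB);
            sub.setReconnectIVL(2000);      // retry the connection every 2 s after a link drop
            sub.setReconnectIVLMax(10000);  // cap the reconnect back-off at 10 s
            sub.setReceiveTimeOut(5000);    // recv() returns null after 5 s instead of blocking forever
            sub.connect("tcp://example-host:5556"); // hypothetical endpoint
            sub.subscribe("".getBytes(ZMQ.CHARSET));
            byte[] frame = sub.recv();
            if (frame == null) {
                // Timed out: the application decides whether to keep waiting or rebuild the socket.
                System.out.println("No message within timeout; the link may be down.");
            }
        }
    }
}

The receive timeout is what turns "stuck on a frame" into a condition your own code can detect and act on.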
I was able to find an answer on their GitHub: https://github.com/zeromq/netmq/issues/845. It seems the behavior is by design, as I got the same result with the native zmq library via a .NET binding.

JMS Activation spec on Liberty: "WAS_EndpointInitialState" full profile equivalent property?

We are migrating some apps from WAS full profile to WAS Liberty profile.
Some apps have MDBs and need JMS Activation Specs definitions connected to MQ.
In order to enforce strict FIFO ordering of messages in a cluster, we set the "WAS_EndpointInitialState" property to "INACTIVE" on those Activation Specs to tell WAS full profile not to start the Activation Spec on startup. When the cluster starts, we start (i.e. "resume") the activation on one server only.
Q: How can this be achieved with Liberty (v16.0.x)?
I don't see an equivalent parameter within the "properties.wmqJms" stanza.
Thanks
Liberty doesn't have an equivalent parameter/capability for activation specs.
You can open a request for enhancement here:
https://www.ibm.com/developerworks/rfe/?PROD_ID=544
In the meantime, in case it helps, a crude way of simulating the capability is to start the server with the jmsActivationSpec elements commented out, and then make configuration updates to uncomment them as you want them activated.
Unfortunately, as-is (with v16.0.0.3 and the current beta version), it is not possible to deploy an application with MDBs in production due to a serious lack of functionality in the Liberty profile (JMS Activations).
When using the jmsActivationSpec + properties.wmqJms stanzas, it is impossible:
to configure the activation to stop after x failed attempts to consume a message; Liberty tries to consume the message forever without any notification!
to start the activation in an inactive state on startup, so it is impossible to enforce the FIFO paradigm on a queue when deployed in a cluster (or collective, or another form of cluster).
Those are already captured in the following RFE:
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=95885
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=95794
https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=88543
For us it's a clear no-go to move to the WebSphere Liberty profile for those reasons.
This is way too late for the OP, but in case someone comes here looking for a current answer.
Liberty / Open Liberty now offer (as of 18.0.0.1) such a function, which you can enable via the autoStart attribute, e.g.:
<jmsActivationSpec autoStart="false" id="myJMSActSpec"/>
See here for a quick example of how you would use the EndpointControl MBean and/or the server resume CLI command to start message delivery into the server.
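Putting the pieces together, a sketch of what the server.xml wiring might look like with autoStart plus an MQ-backed activation spec (all ids, hosts, ports and names here are hypothetical placeholders, not values from the question):

<jmsActivationSpec autoStart="false" id="myApp/myModule/MyMDB">
    <properties.wmqJms destinationRef="jms/myQueue"
                       queueManager="QM1"
                       transportType="CLIENT"
                       hostName="mqhost.example.com"
                       port="1414"
                       channel="SYSTEM.DEF.SVRCONN"/>
</jmsActivationSpec>

Message delivery can then be resumed on exactly one cluster member once everything is up, preserving the FIFO behaviour the original question was after.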

IBM MQ Log write integrity

In the queue manager object we have a parameter under the log section to define the log write integrity. What is the difference between SingleWrite, DoubleWrite and TripleWrite in IBM MQ log write integrity? Please explain in detail.
LogWriteIntegrity is all about how the queue manager logger writes partial 4KB pages. Unless you are absolutely certain that your file system provides atomically written pages under all circumstances, you should leave it at the default setting of TripleWrite. The option to set anything other than TripleWrite only exists as a possible performance enhancement. However, since partial pages should be rare on a queue manager with a good amount of concurrent work going on, it is not a big area for performance improvement; a better way to improve the performance of your queue manager is to increase concurrency rather than take the risks associated with changing this setting.
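For reference, a minimal sketch of where the setting lives, in the Log stanza of the queue manager's qm.ini file (TripleWrite is the default, so this line is normally not needed):

Log:
   LogWriteIntegrity=TripleWrite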
There is a very useful blog post from MQ Development that you should read. You can find it here: LogWriteIntegrity.... should I pick SingleWrite or TripleWrite?

Websphere MQ 7.1 Automatic Startup

What is the best way to automatically start WebSphere MQ v7.1 queue managers during system startup? I see that there is a SupportPac for it; I just want to make sure this is the right one. We have MQ running on 64-bit Linux. Thanks.
Yes, SupportPac MSL1 is the correct one for Linux.
Some other UNIX flavors can use this SupportPac as-is or with modifications. For Windows, specify the QMgr to be started automatically and the WMQ service will start it.
Update 26 Sep 2017
Responding to comments from byteborg, I checked and it seems IBM have removed SupportPac MSL1 for some reason, even from the list of withdrawn SupportPacs.
As it happens, I'm at MQTC this week and lots of IBMers from the Hursley Lab are here, so I'll ask them to restore it or put it on GitHub. Even if they are able to do so, the internal review process to make that happen is extensive, so it won't happen soon. If that doesn't work, I'll see about getting permission to host it myself. Stay tuned.
Update 1 Oct 2017
While at MQTC, Mark Taylor, the MQ Architect from IBM Hursley Labs, explained the removal of MSL1. Basically, "it didn't work" according to Mark. Instead, IBM have provided guidance in the form of an MQDev blog post Managing queue manager startup and shutdown on Linux with systemd. The post does pretty much what the headline says, and describes how to use systemd to start/stop MQ. Please refer to that post for details.
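For flavour, a minimal sketch of the systemd approach that post describes, as a template unit so each queue manager is an instance (the unit name and paths are assumptions; treat the blog post as the authoritative reference):

# /etc/systemd/system/mq@.service -- hypothetical template unit, one instance per queue manager
[Unit]
Description=IBM MQ queue manager %i
After=network.target

[Service]
Type=forking
User=mqm
Group=mqm
ExecStart=/opt/mqm/bin/strmqm %i
ExecStop=/opt/mqm/bin/endmqm -w %i

[Install]
WantedBy=multi-user.target

Enabling boot-time startup for a queue manager named QM1 would then be: systemctl enable mq@QM1.service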

WebSphereMQ PCFMessageAgent / PCFAgent - Is it Thread Safe?

I am implementing a monitoring and administrative MQ API using the WebSphere MQ Java PCF (Programmable Command Format) library. What I would like to know is whether the PCFAgent and/or PCFMessageAgent classes are thread safe. The documentation does not make it clear [to me].
If not, then I have 2 choices:
Create a pool of agents
Create (and disconnect) agents on demand.
Any insight into this issue is appreciated.
Cheers.
The important information you seek is probably on this page:
http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp?topic=%2Fcom.ibm.mq.csqzaw.doc%2Fja11160_.htm
The main issue you will see is that the MQQueueManager object (that you either pass in, or is created for you) cannot really do 2 things at once on a single connection.
So if you have one agent sitting in a get-with-wait, waiting for the response to a big query (say, getting full details for thousands of queues), nothing else can be done using that connection until the reply comes back.
Connect/disconnect are the biggest overheads when talking to MQ, so if you need multi-threaded access I would go with option 1; otherwise you'll pay a big performance penalty having to wait for a connect each time.
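A minimal sketch of option 1, assuming the standard PCFMessageAgent API: a fixed pool handed out through a BlockingQueue, so no single MQ connection is ever used by two threads at once (the connection details are hypothetical placeholders):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import com.ibm.mq.headers.pcf.PCFMessage;
import com.ibm.mq.headers.pcf.PCFMessageAgent;

public class PcfAgentPool {
    private final BlockingQueue<PCFMessageAgent> pool;

    public PcfAgentPool(String host, int port, String channel, int size) throws Exception {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            // Each agent owns its own connection, paying the connect cost once up front.
            pool.add(new PCFMessageAgent(host, port, channel));
        }
    }

    public PCFMessage[] send(PCFMessage request) throws Exception {
        PCFMessageAgent agent = pool.take(); // blocks until an agent is free
        try {
            return agent.send(request);
        } finally {
            pool.put(agent); // always return the agent, even on failure
        }
    }
}

With this shape, a long-running inquiry only ties up the one agent the calling thread borrowed; the rest of the pool stays available to other threads.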
