Spring Boot connecting to IBM MQ Cloud

I'm trying to send a message to an IBM MQ queue from my Spring Boot service. I could send and receive messages with the IBM MQ installed on my laptop.
However, when I changed the configuration to connect to the IBM MQ Cloud, it stopped working.
My configuration is as follows:
I'm getting the following error:
Note:
I could telnet into the host & port
I tried the following for the username & password:
the APIKey & app name
my IBM username & password

It seems to be a known issue:
The connection may be broken for a number of different reasons. The 2009 return code indicates that something prevented a successful connection to the Queue Manager. The most common causes are the following:
A firewall that is terminating the connection
An IOException that causes the socket to be closed
An explicit action to cause the socket to be closed by one end
The queue manager is offline
The maximum number of channels allowed by the queue manager is already in use
A configuration problem in the Queue Connection Factory (QCF)
Can you please try the suggestions from here? The last cause on that list, a QCF configuration problem, is the quickest to rule out; see the sketch below.
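Here is a minimal, hedged sketch of a client-mode connection factory for MQ on Cloud using the MQ classes for JMS. Every host, port, channel, and credential value is a placeholder to be replaced with the values from your MQ on Cloud connection details. Two points worth checking against your configuration: MQ on Cloud endpoints require TLS, and they authenticate with an application name plus API key, not an IBMid username and password.

import javax.jms.Connection;
import javax.jms.JMSException;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MqCloudConnect {
    public static Connection connect() throws JMSException {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);      // client mode, not local bindings
        cf.setHostName("qm1.qm.us-south.mq.appdomain.cloud"); // hypothetical host
        cf.setPort(31856);                                    // hypothetical port
        cf.setQueueManager("QM1");                            // hypothetical queue manager
        cf.setChannel("CLOUD.APP.SVRCONN");                   // hypothetical channel
        cf.setSSLCipherSuite("ANY_TLS12");                    // TLS is mandatory; match the channel's CipherSpec
        // Application name as the user, API key as the password:
        return cf.createConnection("myApp", "myApiKey");
    }
}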

Usually, the most common reason for error code 2009, MQRC_CONNECTION_BROKEN, is that your JMS client opens multiple connections to the queue manager and they remain unclosed even though they're no longer being used; you then eventually run out of channels. You can raise the queue manager's channel limit (MaxChannels in qm.ini) to more than double what you expect to need.
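The fix on the client side is to scope every connection so that it is always closed. A minimal sketch with the JMS 2.0 API (the factory, queue name, and credentials are placeholders):

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;

public class SendOnce {
    // try-with-resources guarantees the context (and its channel instance)
    // is returned to the queue manager even if the send throws.
    public static void send(ConnectionFactory cf, String queueName, String text) {
        try (JMSContext ctx = cf.createContext("myApp", "myApiKey")) { // placeholder credentials
            ctx.createProducer().send(ctx.createQueue("queue:///" + queueName), text);
        } // ctx.close() runs here
    }
}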
Check your FFST log file generated by IBM MQ classes for JMS. It gives you detailed info on connections/errors:
First Failure Support Technology ( FFST ) files

Related

IBM Message Broker (IIB) Node can't send credentials to MQ

I have a local MQ which my IIB connects to in client mode (i.e. not as a trusted application). I've turned on client connection security checking in the QM, and now IIB can't connect because it doesn't send a password and it sends the wrong username (by default it uses the user that the process starts with). I've seen lots of documentation around setting dbparms mq::*. I could be wrong, but that only seems to affect the MQ Input and Output nodes, not the broker itself and its configuration manager connections to MQ?
However, I've tried setting those values so that all client connections to my QMGR get a user/password, but it still fails, and I can see in the MQ logs that it's trying to connect using the userid that the IIB process was started with (and presumably without a password).
So, how do I get IIB to ALWAYS send a user/password to MQ when connecting the node/configuration manager to the QM over client connections?
Clarification:
I have set mq::MQ -u -p, and still the node attempts to connect to the QMGR using the ID that the MQSI process was started with, not the -u parameter. I have no execution groups and (of course) no flows in my broker, so it can only be a core IIB component that's attempting the connection.
According to the IBM Integration Bus v10.0.0.10 Knowledge Center page "Connecting to a secured WebSphere MQ queue manager" you can set this in three ways:
On each MQ Node by specifying a Security identity property.
For all MQ connections to a named queue manager
For all MQ connections.
The order of precedence is the same as above, so if you have an ID set up for all queue managers, you can override it for a specific queue manager or a specific MQ node.
If you are already connecting to a queue manager called, for example, IIBQM, you could issue the following command so that all connections to that queue manager use the specified username and password:
mqsisetdbparms integrationNodeName -n mq::QMGR::IIBQM -u username -p password
The KC page explains how to set it all three ways; the three scopes map onto three resource names on the same command, as shown below. If you have any specific questions, please update your question by clicking edit and adding more details, and I can update my answer.
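Hedged examples of all three scopes (the security identity name is illustrative; the QMGR form is the one quoted above):

mqsisetdbparms integrationNodeName -n mq::mySecurityIdentity -u username -p password
mqsisetdbparms integrationNodeName -n mq::QMGR::IIBQM -u username -p password
mqsisetdbparms integrationNodeName -n mq::MQ -u username -p password

The first matches an MQ node's Security identity property, the second applies to every connection to queue manager IIBQM, and the third applies to all MQ connections.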
Hurrah - I've worked this out!
Although I had not enabled CHCKLOCL or CHCKCLNT in MQ, the fact that I had an IDPWLDAP AUTHINFO object configured meant that MQ was going to LDAP to find out who the user logging in was (presumably so that it could check its group permissions). So I had to put my local user into LDAP and set its group.
This got my broker working (with no execution groups or flows). Once I deployed my simple MQInput/MQOutput node flow, it failed on authorisation using the same ID. I could then see that it was binding locally and not connecting as a client (which I had first assumed). Phew - all done. So, to review: the answer was to put the user ID that the mqsi bip/bipbroker process runs under into LDAP, then grant the various MQ permissions so that the broker node and its MQ flow nodes could connect to MQ correctly and put/get messages.
Thanks for your help - and maybe this will help someone else in the future when they turn on MQ security with a local QM and IIB.

How to authenticate remote MQ machine, Error when trying to connect

Access not permitted. You are not authorized to perform this operation. (AMQ4036)
Severity: 10 (Warning)
Explanation: The queue manager security mechanism has indicated that the userid associated with this request is not authorized to access the object.
I am using the same version of MQ on both machines, and the above error occurs when I try to connect between the Windows machines.
1st, make sure you are NOT using a SYSTEM.* channel. Create a channel for your application.
2nd, add the appropriate permissions (via setmqaut) to allow the UserId to connect to the queue manager and access the queues.
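As a hedged example of those two steps (the queue manager QM1, channel APP.SVRCONN, user appuser, and queue APP.REQUEST are all placeholders):

DEFINE CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP)
setmqaut -m QM1 -t qmgr -p appuser +connect +inq
setmqaut -m QM1 -n APP.REQUEST -t queue -p appuser +put +get +browse +inq

The DEFINE CHANNEL line runs inside runmqsc; the setmqaut commands run from the command line.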
IBM Support Technote "WMQ 7.1 / 7.5 / 8.0 / 9.0 queue manager RC 2035 MQRC_NOT_AUTHORIZED or AMQ4036 or JMSWMQ2013 when using client connection as an MQ Administrator" has a good write up on diagnosing and resolving this issue.
If you want more specific help, to start with please provide the following details by editing your question and adding them. Adding them as comments does not help much, since the number of characters is limited and you have no way to format them in a comment.
Version of IBM MQ installed on client and queue manager
Errors in the queue manager's AMQERR01.LOG that happen at the same time as the error you receive.
SVRCONN channel name that you are connecting to.

OnXMSException listener not invoked when network disconnected

I'm using the IBM XMS v9.0 .NET C# client library to connect to IBM MQ.
Once the connection is established, I assign a MessageListener and an OnXMSException handler.
I have set XMSC_WMQ_CLIENT_RECONNECT_TIMEOUT = 30.
We are getting messages on MessageListener and everything works fine.
When I disconnect the network after a successful connection, I don't get any exception delivered to the OnXMSException listener method.
My intention is that if the MQ connection is no longer valid/active, I should get an error back ASAP, so that I can establish a new connection quickly and resume reading messages to avoid a queue backlog.
Is XMSC_WMQ_CLIENT_RECONNECT_TIMEOUT the right setting for this, or does some other setting exist?
(I am testing with a network disconnect because we have noticed that sometimes, even when the MQ connection is no longer active on the MQ server, the client gets no CONNECTION BROKEN error. But sometimes it works.)
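For comparison, this is a hedged sketch of the equivalent wiring with the MQ classes for JMS in Java (the connection factory and credentials are placeholders). Two behaviours worth knowing about regardless of client library: how quickly a pulled cable is noticed depends on TCP keepalive and the channel heartbeat interval, and while automatic client reconnection is in progress the exception listener is generally not called until the reconnect timeout expires.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

public class BrokenConnectionWatch {
    // Register an ExceptionListener so a broken connection is reported
    // asynchronously instead of surfacing on the next synchronous call.
    public static Connection open(ConnectionFactory cf) throws JMSException {
        Connection connection = cf.createConnection("myApp", "myApiKey"); // placeholder credentials
        connection.setExceptionListener(e -> {
            // e.g. reason code 2009 (MQRC_CONNECTION_BROKEN) lands here
            // once the socket failure is actually detected
            System.err.println("Connection lost: " + e.getMessage());
            // trigger application reconnect logic here
        });
        connection.start();
        return connection;
    }
}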

Can't connect Websphere MQ Queue Manager

I'm a beginner with WebSphere MQ. I was working with MQ 6 and it was fine, but now I've installed MQ 7.1: I can create a new queue manager, but it can't connect and it gives me the following error:
Do you have any idea about that? Thank you :)
You can look up any WebSphere MQ error code, if either the WebSphere MQ Client or Server is installed, using the mqrc command. In this case:
C:\Users\MUSR_MQADMIN>mqrc 2059
2059 0x0000080b MQRC_Q_MGR_NOT_AVAILABLE
A 2059 usually indicates that the listener is not running or the queue manager is down. There's a different error code if the listener is running but the QMgr name is wrong, and another if the connection reaches the right QMgr but the channel name is wrong. Sometimes you can get a 2059 if the channel was closed at the server side by an exit, but since you didn't mention any exits, I'm assuming in this case that it's a listener problem.
Hopefully by now you are defining a listener object rather than using inetd or the runmqlsr command. Defining an object and setting it to start and stop under QMgr control is the most reliable way to configure it.
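For example, in runmqsc (the listener name and port are illustrative):

DEFINE LISTENER(QM1.LISTENER) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER(QM1.LISTENER)

CONTROL(QMGR) is what makes the listener start and stop with the queue manager.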
Once you get past the 2059, you should be aware that as of WMQ V7.1, the queue managers are secure by default and won't accept any remote client connections unless you explicitly authorize them. This is the opposite of the behavior of V6 where on a newly defined queue manager running a listener, anyone with a TCP route to it could administer it and remotely execute OS code as the mqm user. So I expect that the next problem you run into will be 2035 errors.
I've been told this means more work for the WMQ administrator. The only case in which that's true is if the V6 or earlier queue manager had been configured without security. If you compare the tasks to secure a V7.0 QMgr with the tasks to provision access on a V7.1 or higher QMgr, provisioning access turns out to be easier. However, if you liked the V7.0 behavior, you can always alter the QMgr to disable CHLAUTH rules. Needless to say, leaving security enabled is highly encouraged.
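In runmqsc, the two options look like this (the channel name is illustrative; the ADDRESS('*') rule is deliberately permissive and should be tightened to your network):

SET CHLAUTH('APP.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL)
ALTER QMGR CHLAUTH(DISABLED)

The first allows clients in through one named channel; the second restores the V7.0 behavior wholesale.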
To debug security errors, alter the QMgr to enable authorization events using the runmqsc command ALTER QMGR AUTHOREV(ENABLED). Next, download and install SupportPac MS0P into WebSphere MQ Explorer. Then, when you do get a security error, use WebSphere MQ Explorer to look at the event queue: right-click on the queue and select the option to parse the event messages. This will tell you, in excruciating detail, all the information you need to debug the authorization error.
Finally, if you wish to read up on the new security features, go to t-rob.net/links and look at the conference presentations there. There are also some articles indexed if you scroll down.
In the screen-shot, I see hostname "127.0.0.1" and port # 1414. If it is a local queue manager then connect directly to it.
Also, each queue manager MUST use a unique port number. If you had it working with a WMQ v6 queue manager, is this the same queue manager? If not, then make sure each queue manager uses a different port number (e.g. 1415, 1416, etc.).
I got the same problem, but I resolved it by:
1. creating a listener manually: DEFINE LISTENER(LSTR1) TRPTYPE(TCP) PORT(xxxx) CONTROL(QMGR)
2. setting MCAUSER('mqm') on the channel (note that this gives clients full administrative access, so it is only suitable for testing).

AMQ9504: A protocol error was detected for channel

I'm unable to connect remotely from WebSphere Application Server to a queue manager in WebSphere MQ. However, WAS does connect to the queue manager when it is installed on the same machine. I'm using WebSphere MQ 7.5 and WebSphere Application Server 7.0.
While attempting to connect WAS remotely to the queue manager, the following error messages were logged.
Error Message from WebSphere MQ:
1/30/2013 21:12:09 - Process(3624.6) User(MUSR_MQADMIN)
Program(amqrmppa.exe)
Host(KHILT-269) Installation(Installation1)
VRMF(7.5.0.0) QMgr(QM.TEST)
AMQ9504: A protocol error was detected for channel 'TEST_CHANNEL'. EXPLANATION: During communications with the
remote queue manager, the channel program detected a protocol error.
The failure type was 11 with associated data of 0. ACTION: Contact the
systems administrator who should examine the error logs to determine
the cause of the failure.
Error Message at WebSphere Application Server:
A connection could not be made to WebSphere MQ for the following
reason: CC=2;RC=2009
As can be seen from the logs, I have created the queue manager as QM.TEST and the channel as TEST_CHANNEL. The listener port defined for the queue manager is 1417, protocol TCP.
I did a lot of googling but didn't find an appropriate solution. I'd appreciate any help in this regard.
Thanks in adv, KAmeer
I had a similar issue with WAS 7 and WMQ 7.5: I was able to connect to my existing WMQ 7.0 QM but not to my new WMQ 7.5 QM. Apparently there was a change to the WMQ components bundled with WAS 7 after the initial 7.0.0.0 release. After updating the resource adapter, I was able to make a successful connection to both queue managers.
The queue manager generates a protocol error and terminates the connection immediately when it receives an unexpected TSH flow from the client; as a result, the client receives a 2009 error. Technically, a lower-level MQ client can communicate with a higher-version MQ queue manager, and vice versa, unless there are known restrictions and/or an MQ defect/APAR. The error message indicates the queue manager is running MQ 7500, which is the base 7.5 version. It is recommended to upgrade the queue manager to the latest fix pack available, to rule out any known problems. You could also try disabling shared conversations on the SVRCONN channel (i.e. setting SHARECNV to 0) and check whether that works around the problem until it is resolved.
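If you try that workaround, the command in runmqsc against QM.TEST would be:

ALTER CHANNEL(TEST_CHANNEL) CHLTYPE(SVRCONN) SHARECNV(0)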
Open a PMR with IBM as this sounds like a bug.
The cause for this is that an MQ 7 client cannot talk to MQ 7.5; the client needs to use the MQ 7.5 jar files.
I had this problem. In my case it was our MQ library doing an MQGET with an unlimited wait, so the library was blocked in the MQGET while I called kill; the kill generated an event and tried to disconnect while the get was still running. As MQGET does not support being unblocked via a signal, I had to change the code so that it does not wait on the get forever, and to add a flag set by the kill handling so the app could detect that it was time to die when it returned from the get.
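A hedged sketch of that change with the MQ classes for Java (class and field names are illustrative): bound the wait interval and poll a stop flag instead of blocking indefinitely.

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.constants.MQConstants;

public class BoundedGetLoop {
    private volatile boolean stopping = false; // set this from your shutdown/signal handler

    public void consume(MQQueue queue) throws MQException {
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = MQConstants.MQGMO_WAIT | MQConstants.MQGMO_FAIL_IF_QUIESCING;
        gmo.waitInterval = 5000; // wake every 5 seconds to re-check the flag
        while (!stopping) {
            MQMessage msg = new MQMessage();
            try {
                queue.get(msg, gmo);
                // ... process msg ...
            } catch (MQException e) {
                if (e.reasonCode == MQConstants.MQRC_NO_MSG_AVAILABLE) {
                    continue; // wait expired with no message; loop and re-check the flag
                }
                throw e; // a real error: let the caller handle it
            }
        }
    }
}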
