I'm trying to connect to my MQ servers. I have two MQ servers, server 1 and server 2, and have set them as the connectionNameList on the mqConnectionFactory. Is there a way for MQ to connect to server 1 if server 2 fails? How can I tell whether the MQ server is connected? I saw that clientReconnectOptions is set to 67108864, but I'm not sure what that means.
The possible settings for ClientReconnectOptions are documented in the IBM MQ Knowledge Center page CLIENTRECONNECTOPTIONS.
Below is an example using setClientReconnectOptions so that the application can reconnect to any queue manager listening on the two host(port) combinations set in the connectionNameList.
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
factory.setQueueManager("QMNAME");
factory.setChannel("SVRCONN.CHL");
factory.setConnectionNameList("hostName1(port),hostName2(port)");
factory.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
// Set the amount of time, in seconds, the client will keep trying to reconnect
factory.setClientReconnectTimeout(43200); // 12 hours
// The default is 1800 seconds:
//factory.setClientReconnectTimeout(WMQConstants.WMQ_CLIENT_RECONNECT_TIMEOUT_DEFAULT);
Note that clients will not always attempt to reconnect. See the following from the endmqm man page on Linux:
If you issue endmqm to stop a queue manager, reconnectable clients do
not try to reconnect. To override this behavior, specify either the -r
or -s option to enable clients to start trying to reconnect.
Note: If a queue manager or a channel ends unexpectedly, reconnectable
clients start trying to reconnect.
The problem: MQ Explorer on a Windows Server machine with connections to a number of remote queue managers in a Linux environment. I have migrated a new queue manager to the Linux environment and added it as a remote queue manager in MQ Explorer, but it will not connect, returning the following error:
Queue manager not available for connection - reason 2059. (AMQ4043)
Severity: 20 (Error) Explanation: The attempt to connect to the queue
manager failed. This could be because the queue manager is incorrectly
configured to allow a connection from this system, or the connection
has been broken. Response: Ensure that the queue manager is running.
If the queue manager is running on another computer, ensure it is
configured to accept remote connections.
Things I have already checked:
MQ Explorer connects to other remote qmgrs on the same machine
The configuration of the new qmgr connection in MQ Explorer is identical to the preexisting ones
The new qmgr is up and running
I have a telnet connection from the Windows environment to the new qmgr's port
The new qmgr has the same channel and listener with the same configuration as the qmgrs that connect successfully
Most of the advice I've seen on this problem is covered by points 1 and 3. Can anyone suggest anything else I can try?
I am quite new to IBM MQ. Mine is a multi-instance queue manager;
one instance acts as a fail-over.
How can I connect to them even if one of them is down?
I am not sure whether my terminology is right.
I am trying to connect using the example below:
https://raw.githubusercontent.com/ibm-messaging/mq-dev-samples/master/gettingStarted/jms/JmsPutGet.java
Instead of populating WMQ_HOST_NAME and WMQ_PORT, populate WMQ_CONNECTION_NAME_LIST with a comma-separated list in the format host1(port1),host2(port2). IBM MQ will attempt to connect to host1 first, and if that fails it will attempt host2 during the initial connection attempt.
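For illustration, a minimal sketch of that setup (the host, port, queue manager and channel names below are placeholders, not values from your environment):
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

MQConnectionFactory cf = new MQConnectionFactory();
cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
cf.setQueueManager("QM1");
cf.setChannel("DEV.APP.SVRCONN");
// Comma-separated host(port) pairs, tried in order during the initial connect.
// With the JmsFactoryFactory style used in the linked sample, the equivalent is
// cf.setStringProperty(WMQConstants.WMQ_CONNECTION_NAME_LIST, "host1(1414),host2(1414)");
cf.setConnectionNameList("host1(1414),host2(1414)");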
If you want the client to reconnect after a failure, you will also need to enable MQ automatic client reconnection like this:
cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
cf.setClientReconnectTimeout(1800); // how long in seconds to continue to attempt reconnection before failing
In the Spring documentation, section 32.6 TCP Adapters, it is mentioned that if we use clientMode = "true" then the inbound adapter is responsible for the connection with the external server.
I have created a flow in which a TCP adapter with a client connection factory makes the connection with the external server. The code for the flow is:
IntegrationFlow flow = IntegrationFlows
        .from(Tcp.inboundAdapter(
                    Tcp.nioClient(hostConnection.getIpAddress(), Integer.parseInt(hostConnection.getPort()))
                        .serializer(customSerializer)
                        .deserializer(customSerializer)
                        .id(hostConnection.getConnectionNumber()))
                .clientMode(true)
                .retryInterval(1000)
                .errorChannel("testChannel")
                .id(hostConnection.getConnectionNumber() + "adapter"))
        .enrichHeaders(f -> f.header("CustomerCode", hostConnection.getConnectionNumber()))
        .channel(directChannel())
        .handle(Jms.outboundAdapter(ConnectionFactory())
                .destination(hostConnection.getConnectionNumber()))
        .get();
theFlow = this.flowContext.registration(flow).id(hostConnection.getConnectionNumber() + "outflow").register();
I have created multiple flows by iterating over the list of connections,
running the above code in a for loop and registering each flow in the flow context with a unique ID.
My clients are created successfully with no issue and then establish their connections as supported by the topology.
Issue:
I counted the number of client connections created successfully: 7 client connections (7 integration flows) were made successfully, and they initiate the connection themselves.
When I create the 8th client connection (the 8th flow is created and registered successfully), .clientMode(true) does not work: the client does not initiate the connection itself after the first failure. It tries once to make the connection; if it connects successfully there is no issue, but in case of failure it does not retry.
Also, the other 7 client connections that were created successfully stop initiating the connection themselves once they get disconnected.
Note: there is no issue with the flows, only with the TCP adapters; they stop initiating the connection.
The flows are created and registered successfully; when I run the control bus command #adapter_id.retryConnection(), the adapter does connect to the server.
I don't understand what the issue is with my flow such that it cannot initiate a connection after a particular count (i.e. seven), or whether there is a limit on the number of clients that can be created.
One possibility is that the taskScheduler's thread pool is exhausted; that shouldn't happen with the above configuration, but it depends on what else is in the application. Take a thread dump (e.g. with jstack) to see what the taskScheduler threads are doing.
See the documentation for information about how to configure the threads in the scheduler (a sketch follows below). However, if increasing the pool size solves it, you should really figure out which task(s) are tying up scheduler threads for long periods.
Also turn on DEBUG logging to see if it provides any clues.
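As a hedged sketch, one way to enlarge the default Spring Integration taskScheduler pool (10 threads by default) is to define your own bean under the same name; the pool size of 20 is just an illustrative value:
import org.springframework.context.annotation.Bean;
import org.springframework.integration.context.IntegrationContextUtils;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Bean(name = IntegrationContextUtils.TASK_SCHEDULER_BEAN_NAME)
public ThreadPoolTaskScheduler taskScheduler() {
    // Replaces the default integration taskScheduler (pool size 10) with a larger pool
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(20); // illustrative value
    return scheduler;
}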
I created a queue manager called QMTEST in IBM WebSphere MQ Explorer. I want to connect to a remote queue manager (remote IP address). I followed these steps:
Add remote queue manager
Queue manager name: QMTEST [Next]
Host name or IP address: X.X.X.X (remote IP) [Finish]
But I couldn't connect. I got the error message 'Could not establish connection to the queue manager - reason 2538. (AMQ4059)'. What can I do?
The four digit number in your error message is an MQRC (MQ Reason Code). This number gives you more information about what went wrong. You can look it up in Knowledge Center.
MQRC_HOST_NOT_AVAILABLE (2538)
There is a list of possible things that could cause this error. My guess is that it is the first one: you have not started a listener on the queue manager, since you do not mention doing that in your question.
You should also read the following link, which gives some basic details on how to connect to a remote queue manager. You appear to have the MQ Explorer side sorted, but perhaps not the queue manager side.
Setting up the server using the command line
Please ensure the listener is running on the remote queue manager side. The default MQ listener port is 1414. If the listener is running, check the queue manager error logs for any connection errors from MQ Explorer.
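As a sketch, you could define, start and check a listener in runmqsc on the remote queue manager like this (LISTENER.TCP is an assumed name and 1414 an assumed port; substitute your own):
DEFINE LISTENER(LISTENER.TCP) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR) REPLACE
START LISTENER(LISTENER.TCP)
DISPLAY LSSTATUS(LISTENER.TCP)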
Are you sure the qmgr and its listener are running and that you have a SYSTEM.ADMIN.SVRCONN channel? That is the server-connection channel used for remote administration of a queue manager. This technote may be helpful.
Is this on a modern Windows or Linux server? If so, did you open the port (i.e. 1414) in the firewall?
Sometimes when we create a queue manager the remote administration objects are not created, which is why we get such errors: the connection cannot find these objects. To create them, right-click on the queue manager, select Remote Administration Objects and create them, then start the listener.
I experienced this same error and the queue was configured correctly.
I was using Eclipse and switched to the MQ Explorer setup available from IBM web page.
After that, I was totally able to see the queues and everything I was supposed to see.
I am trying to put a message into a WebSphere MQ queue from an orchestration which is deployed on Cast Iron Live. I have used a Secure Connector since the orchestration is deployed on Cast Iron. When I try to execute the flow, it fails and the message is not placed in the MQ queue. These are the errors:
Error while trying to call remote operation execute on Secure Connector for activity
com.approuter.module.mq.activity.MqPut and Secure Connector LocalSecureConnector,
error is Unable to put message on queue null. MQ returned error code 2538.
Unable to put message on queue null. MQ returned error code 2538.
Fault Name: Mq.Put.Operation
Activity Id: 163
Message: Unable to put message on queue null. MQ returned error code 2538.
Activity Name: Put Message
Fault Time: 2015-07-15T05:40:29.711Z
Can someone please help me resolve this? Please let me know if any further details are required.
Here are the details:
Cast Iron flow is deployed on Cast Iron Cloud i.e Cast Iron Live
MQ is running on-premise
The port I am trying to connect is 1414.
Have a secure connector running on the machine where MQ is installed.
MQ version is 8.
In the Cast Iron flow, I am using an MQ connector, giving the hostname where MQ is running, port 1414, channel name SYSTEM.DEF.SVRCONN, and username mqm. I also tried using my logon username by adding it to the mqm group, but that doesn't seem to work either.
The return code is instructive:
2538 0x000009ea MQRC_HOST_NOT_AVAILABLE
This indicates that Cast Iron is attempting to contact MQ using a client connection and not finding a listener at the host/port that it is using.
There are a couple of possibilities here but not enough info to say which it might be. I'll explain and provide some diagnostics you can try.
The 2538 indicates an attempt to contact the QMgr has failed. This might be that, for example, the QMgr isn't listening on the configured port (1414) or that the MQ listener is not running.
The error code says the queue name is "null". The question doesn't specify which queue name the connector is configured with but presumably it's been configured with some queue name. This error code suggests the Secure Connector on the MQ server side doesn't have its configuration installed.
The Cast Iron docs advise connecting with an ID in the mqm group but do not mention that on any MQ version 7.1 or higher this is guaranteed to fail unless special provisions are made to allow the admin connection. It may be that it's actually failing for an authorization error and the connector not reporting the correct error.
If it is as simple as the listener not running, that's easy enough to fix. Just start it and make sure it's on 1414 as expected.
Next, ensure that the Secure Connector has the configuration that was created using the Cast Iron admin panel. You need to understand why the error code says the queue name is null.
Now enable Authorization Events and Channel Events in the QMgr and try to connect again. The connector on the MQ server should connect when started and if successful you can see this by looking at the MQ channel status. However, if unsuccessful, you can tell by looking at the event messages or the MQ error logs. Both of these will show authorization failures and connection attempts, if the connection has made it that far.
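A quick sketch of the runmqsc commands for that step (attribute names as used by ALTER QMGR; adjust as needed for your environment):
* Enable authorization and channel events so failures show up as event messages
ALTER QMGR AUTHOREV(ENABLED) CHLEV(ENABLED)
* Check whether the connector's channel instance is running
DISPLAY CHSTATUS(*)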
The reason I'm expecting 2035 Authorization Error failures is that any QMgr from v7.1 and up will by default refuse an administrative connection on any channel. This is configured in the default set of CHLAUTH rules. The intent is that the MQ admin has to explicitly provision admin access by adding one or more new CHLAUTH rules.
For reasons of security SYSTEM.DEF.* and SYSTEM.AUTO.* channels should never be used for legitimate connections. The Best Practice is to define a new SVRCONN, for example one named CAST.IRON.SVRCONN and then define a CHLAUTH rule to allow the administrative connection.
For example:
DEFINE CHL(CAST.IRON.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) REPLACE
SET CHLAUTH('CAST.IRON.SVRCONN') TYPE(ADDRESSMAP) +
ADDRESS('127.0.0.1') +
USERSRC(MAP) MCAUSER('mqm') +
ACTION(REPLACE)
SET CHLAUTH('CAST.IRON.SVRCONN') TYPE(BLOCKUSER) +
USERLIST('*NOBODY') +
WARN(NO) ACTION(REPLACE)
The first statement defines the new channel.
The next one allows connections from 127.0.0.1, which is where the Secure Connector lives. (Presumably you installed the Secure Connector on the same server as MQ, yes?) Ideally the connector would use TLS on the channel, and instead of IP filtering the CHLAUTH rule would filter on the certificate Distinguished Name. This rule is not nearly so selective and allows anyone on the local host to be an MQ administrator by using this channel.
The last statement overrides the default CHLAUTH rule which blocks *MQADMIN with a new rule that blocks *NOBODY but just for that channel.