What is the use of a server conn in WebSphere MQ and why do we go for it?
What is the difference between client conn and server conn?
In some respects these are two opposite things, but they need to match to make a client connection to a queue manager. It's quite a generic topic, but fortunately there is lots of useful documentation about this in Google / the IBM Knowledge Center, e.g. https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.con.doc/q016480_.htm
As a queue manager, if you are going to let clients connect into you, you need to be able to provide some configuration details (heartbeat intervals, max message sizes, user exits) - these are configured on a SVRCONN channel
As an application, if you want to connect into a queue manager via the client bindings (usually to go to another machine), you need some information about the configuration to use and these are configured on a CLNTCONN channel.
The application 'provides' a CLNTCONN channel, and once the connection is made, an equivalent SVRCONN channel is looked up, and the configuration values are negotiated and the connection made.
An application can 'provide' a CLNTCONN channel at least 3 common ways...
- As part of an MQSERVER environment variable
- Via a client channel table (MQCHLLIB/MQCHLTAB environment variables)
- During an MQCONNX call it can provide the channel details
More details here:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q027440_.htm
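For illustration, here is a minimal sketch of the third option above, where the application supplies the channel details itself at connect time. It uses the WebSphere MQ classes for Java; the host, port, channel and queue manager names are made-up placeholders.

import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;

public class ClientConnectSketch {
    public static void main(String[] args) throws MQException {
        // These values play the role of a CLNTCONN definition: they tell the
        // client library which SVRCONN channel to request and where to find it.
        MQEnvironment.hostname = "mqhost.example.com";
        MQEnvironment.port = 1414;
        MQEnvironment.channel = "APP1.SVRCONN";

        MQQueueManager qMgr = new MQQueueManager("QM1"); // client connection is made here
        System.out.println("Connected to QM1");
        qMgr.disconnect();
    }
}

The MQSERVER route achieves the same thing without code, e.g. MQSERVER=APP1.SVRCONN/TCP/mqhost.example.com(1414), and the CCDT route does it through a compiled table of CLNTCONN definitions.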
Related
I am looking for a detailed description of how an IBM MQ client or listener gets messages from the MQ server when new messages are placed on an MQ queue or topic.
How is the connection between the MQ client and MQ server created?
Does the MQ client initiate the connection with the server, or does the server initiate the connection to its consumer?
If we have a connection pool defined on the MQ client, how does the client know it has to create more connections to the server as messages increase on the server? How does the client know about the messages on the server?
Is there communication from the server to the client which tells the client that new messages have arrived?
I am looking for these details, not for how to set up MQ channels or listeners. I am looking for how it works behind the scenes.
If someone can point me in the right direction or to documentation, that would be great.
It's hard to speak definitively about how the IBM WebSphere MQ client & server work since they're closed source, but based on my experience with other messaging implementations I can provide a general explanation.
A JMS connection is initiated by a client to a server. A JMS client uses a javax.jms.ConnectionFactory to create a javax.jms.Connection which is the connection between the client and server.
Typically when a client uses a pool, the pool is either filled "eagerly" (a certain number of connections are created when the pool is initialized, to fill it to a certain level) or "lazily" (the pool is filled with connections one by one as clients request them from the pool). If a client requests a connection from the pool while all the connections in the pool are in use and the maximum number of connections allowed by the pool hasn't been reached, another connection will be created. If the pool has reached its maximum allowed size (i.e. no more connections can be created), the requesting client will have to wait for another client to return its connection to the pool, at which point the pool will give it to the waiting client.
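Purely as an illustration of that behaviour (this is not how any particular vendor's pool is implemented internally), a lazily filled pool with a maximum size could look roughly like this:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

// Lazily filled pool: connections are created on demand up to maxSize;
// once the cap is reached, borrowers block until a connection is returned.
public class SimpleConnectionPool {
    private final ConnectionFactory factory;
    private final int maxSize;
    private final BlockingQueue<Connection> idle = new LinkedBlockingQueue<>();
    private final AtomicInteger created = new AtomicInteger();

    public SimpleConnectionPool(ConnectionFactory factory, int maxSize) {
        this.factory = factory;
        this.maxSize = maxSize;
    }

    public Connection borrow() throws JMSException, InterruptedException {
        Connection conn = idle.poll();              // reuse an idle connection if there is one
        if (conn != null) {
            return conn;
        }
        if (created.incrementAndGet() <= maxSize) { // below the cap: create a new connection
            try {
                return factory.createConnection();
            } catch (JMSException e) {
                created.decrementAndGet();
                throw e;
            }
        }
        created.decrementAndGet();
        return idle.take();                         // pool exhausted: wait for a returned connection
    }

    public void giveBack(Connection conn) {
        idle.offer(conn);                           // hand the connection back for reuse
    }
}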
A JMS client can find out about messages on the server in a few different ways.
If the JMS client wants to ask the server occasionally about the messages it has on a particular queue, it can create a javax.jms.MessageConsumer and use the receive() method. This method can wait forever for a message to arrive on the queue, or it can take a timeout parameter so that if a message doesn't arrive within the specified timeout the call to receive() returns.
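A minimal sketch of that polling style using the standard javax.jms API (the queue name is hypothetical and the Session is assumed to come from an existing connection):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class PollingConsumerSketch {
    // Returns the next message, or null if nothing arrives within five seconds.
    public static Message pollOnce(Session session) throws JMSException {
        Queue queue = session.createQueue("DEV.QUEUE.1");
        MessageConsumer consumer = session.createConsumer(queue);
        try {
            return consumer.receive(5000); // receive() with no argument would wait indefinitely
        } finally {
            consumer.close();
        }
    }
}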
If the JMS client wants to receive a message from a particular queue as soon as the message arrives on the queue, it can create a javax.jms.MessageListener implementation and register it on the queue. Once such a listener is registered, when a message arrives on the queue the server will send the message to the listener. This is sometimes referred to as a "callback" since the server is "calling back" to the client.
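And a sketch of the callback style, again with standard javax.jms classes and a hypothetical queue name (note that no messages are delivered until the connection is started):

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class CallbackConsumerSketch {
    public static void listen(Connection connection, Session session) throws JMSException {
        Queue queue = session.createQueue("DEV.QUEUE.1");
        MessageConsumer consumer = session.createConsumer(queue);
        // The provider invokes onMessage as each message arrives on the queue.
        consumer.setMessageListener(message -> System.out.println("Received: " + message));
        connection.start();
    }
}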
The first thing you should do is take a course on JMS/IBM MQ or go to the new IBM conference called the Integration Technical Conference.
Ok, now to your questions:
How is the connection between the MQ client and MQ server created?
You simply call the createQueueConnection method of the QueueConnectionFactory class and specify the credentials.
QueueConnection conn = cf.createQueueConnection("myUserId", "myPwd");
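For completeness, a sketch of how cf itself might be built when connecting as a client with the IBM MQ classes for JMS; the host, port, channel and queue manager values are placeholders, not anything specific to the question:

import javax.jms.JMSException;
import javax.jms.QueueConnection;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class CreateFactorySketch {
    public static void main(String[] args) throws JMSException {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT); // network (client) connection rather than bindings
        cf.setHostName("mqhost.example.com");
        cf.setPort(1414);
        cf.setChannel("APP1.SVRCONN");
        cf.setQueueManager("QM1");

        QueueConnection conn = cf.createQueueConnection("myUserId", "myPwd");
        conn.close();
    }
}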
Does the MQ client initiate the connection with the server, or does the server initiate the connection to its consumer?
The MQ client application starts the connection - always.
If we have a connection pool defined on the MQ client, how does the client know it has to create more connections to the server as messages increase on the server? How does the client know about the messages on the server?
It is up to the team's architect or lead developer to understand the message flow and message patterns. Hence, they will know what to set the pool count to. Lots and lots of testing helps too. Some client applications will only require a pool count of 10, whereas other applications may need a pool count of 50 because they have a heavier flow.
Is there communication from the server to the client which tells the client that new messages have arrived?
You use the createReceiver method of the QueueSession class to create a receiver, then call its receive method to retrieve a message. Pass a timeout value to the receive call rather than continuously polling the queue manager.
Again, some training on the use of JMS/IBM MQ is strongly recommended.
I have a local MQ which my IIB connects to in client mode (i.e. not as a trusted application). I've turned client connection security checking on in the QM, and now IIB can't connect because it doesn't send a password and it's sending the wrong username (by default it uses the user that the process starts with). I've seen lots of documentation around setting dbparms mq::*. I could be wrong, but that only seems to affect the MQInput and MQOutput nodes? Not the actual broker and its configuration manager connections to MQ?
However, I've tried setting those values so that all client connections to my QMGR get a user/password, but it still fails, and I can see in the MQ logs that it's trying to connect using the user ID that the IIB process was started with (and presumably without a password).
So, how do I get IIB to ALWAYS send a user/password to MQ when connecting the node/configuration manager to the QM using client connections?
Clarification:
I have set mq::MQ -u -p and the node still attempts to connect to the QMGR using the ID that the MQSI process is started with, not the -u parameter. I have no execution groups and (of course) no flows in my broker, so this can only be a core IIB component that's attempting the connection.
According to the IBM Integration Bus v10.0.0.10 Knowledge Center page "Connecting to a secured WebSphere MQ queue manager" you can set this in three ways:
On each MQ Node by specifying a Security identity property.
For all MQ connections to a named queue manager
For all MQ connections.
The order in which the IDs are used is the same as above, so if you have an ID set up for all queue managers, you can override it for a specific queue manager or a specific MQ node.
If you have a queue manager you are already connecting to called, for example, IIBQM, you could specify the following command so that all connections to that queue manager would use the specified username and password.
mqsisetdbparms integrationNodeName -n mq::QMGR::IIBQM -u username -p password
The Knowledge Center page explains how to set it in all three ways. If you have any specific questions, please update your question by clicking edit and adding more details, and I can update my answer.
Hurrah - I've worked this out!
Although I had not enabled CHCKLOCL or CHCKCLNT in MQ, the fact that I had an IDPWLDAP AUTHINFO set meant that MQ was going to LDAP to find out who the user I was logging in with was (presumably so that it could check what group permissions it had). So I had to put my local user into LDAP and set its group.
This got my broker working (with no execution groups or flows). Once I deployed my simple MQInput and MQOutput node flow, it failed due to authorisations using the same ID. I could then see that it was binding locally and not as a client (which I had first considered). Phew - all done. So, to review: the answer was to put the user ID that the mqsi bip/bipbroker process runs under into LDAP, then give it the various MQ permissions so that the broker NODE and its MQ flow NODES could connect to MQ correctly and put/get etc.
Thanks for your help - maybe this will help someone else in the future when they turn on MQ security and have a local QM with IIB.
I'm trying to configure a clustered WebSphere Application Server that connects to a clustered MQ.
However, the information I have is details for two instances of MQ with different host names, server channels and queue managers, all belonging to the same MQ cluster.
On the WebSphere console, I can see input fields for hostname, queue manager and server channel, but I cannot find anywhere to specify multiple MQ details.
If I pick one of the MQ details, will MQ clustering still work? If not, how do I enable MQ clustering given the details I have?
WebSphere MQ clustering affects the behavior of how queue managers talk amongst themselves. It does not change how an application connects or talks to a queue manager so the question as asked seems to be assuming some sort of clustering behavior that is not present in WMQ.
To set up the app server with two addresses, please see Configuring multi-instance queue manager connections with WebSphere MQ messaging provider custom properties in the WAS v7 Knowledge Center for instructions on how to configure a connection factory with a multi-instance CONNAME value.
If you specify a valid QMgr name in the Connection Factory and the QMgr to which the app connects doesn't have that specific name then the connection is rejected. Normally a multi-instance CONNAME is used to connect to a multi-instance QMgr. This is a single highly available queue manager that can be at one of two different IP addresses so using a real QMgr name works in that case. But if the QMgrs to which your app is connecting are two distinct and different-named queue managers, which is what you described, you should specify an asterisk (a * character) as the queue manager name in your connection factory as described here. This way the app will not check the name of the QMgr when it gets a connection.
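If you were building the connection factory in code rather than through the WAS admin console, the same idea looks roughly like this (setter names from the WebSphere MQ JMS classes; the hosts, port and channel name are placeholders):

import javax.jms.JMSException;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MultiInstanceConnameSketch {
    public static MQConnectionFactory build() throws JMSException {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Two host(port) entries; the client tries them in the order listed.
        cf.setConnectionNameList("hostA.example.com(1414),hostB.example.com(1414)");
        cf.setChannel("APP1.SVRCONN"); // must exist with the same name on both queue managers
        cf.setQueueManager("*");       // asterisk: do not check the queue manager's actual name
        return cf;
    }
}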
If I pick one of the MQ details, will MQ clustering still work? If not, how will I enable MQ clustering given the details I have?
Depends on what you mean by "clustering". If you believe that the app will see one logical queue which is hosted by two queue managers, then no. That's not how WMQ clustering works. Each queue manager hosting a clustered queue gets a subset of messages sent to that queue. Any apps getting from that queue will therefore only ever see the local subset.
But if by "clustering" you intend to connect alternately to one or the other of the two queue managers and transmit messages to a queue that is in the same cluster but not hosted on either of the two QMgrs to which you connect, then yes it will work fine. If your Connection Factory knows of only one of the two QMgrs you will only connect to that QMgr, and sending messages to the cluster will still work. But set it up as described in the links I've provided and your app will be able to connect to either of the two QMgrs and you can easily test that by stopping the channel on the one it connects to and watching it connect to the other one.
Good luck!
UPDATE:
To be clear, the details provided are similar to hostname01, qmgr01, queueA, serverchannel01. And the other is hostname02, qmgr02, queueA, serverchannel02.
WMQ Clients will connect to two different QMgrs using a multi-instance CONNAME only when...
The channel name used on both QMgrs is exactly the same
The application uses an asterisk (a * character) or a space for the QMgr name when the connection request is made (i.e. in the Connection Factory).
It is possible to have WMQ connect to one of several different queue managers where the channel name differs on each by using a Client Connection Definition Table, also known as a CCDT. The CCDT is a compiled artifact that you create using MQSC commands to define CLNTCONN channels. It contains entries for each of the QMgrs the client is eligible to connect to. Each can have a different QMgr name, host, port and channel. However, when defining the CCDT the administrator defines all the entries such that the QMgr name is replaced with the application High Level Qualifier. For example, the Payroll app wants to connect to any 1 of 3 different QMgrs. The WMQ Admin defines a CCDT with three entries but uses PAY01, PAY02, and PAY03 for the QMgr names. Note this does not need to match the actual QMgr names. The application then specifies the QMgr name as PAY* which selects all three QMgrs in the CCDT.
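As a rough sketch of the client side of that arrangement, assuming a CCDT file has already been generated and following the PAY* naming convention from the example above (the file path is a placeholder):

import java.net.MalformedURLException;
import java.net.URL;
import javax.jms.JMSException;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class CcdtConnectionSketch {
    public static MQConnectionFactory build() throws JMSException, MalformedURLException {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // Point the client at the compiled channel table instead of a single host/channel.
        cf.setCCDTURL(new URL("file:///var/mqm/AMQCLCHL.TAB"));
        // Selects any of the PAY01/PAY02/PAY03 entries, per the convention described above.
        cf.setQueueManager("PAY*");
        return cf;
    }
}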
Please see Using a client channel definition table with WebSphere MQ classes for JMS for more details on the CCDT.
Is MQ cluster not similar to application server clusters?
No, not at all.
Wherein two child nodes are connected to a cluster, and an F5 URL will be used to distribute the load to each node. Does WMQ not come with a cluster URL / F5 that we can just send messages to, with the partitioning of messages being transparent?
No. The WMQ cluster provides a namespace within which applications and QMgrs can resolve non-local objects such as queues and topics. The only thing that ever connects to a WebSphere MQ cluster is a queue manager. Applications and human users always connect to specific queue managers. There may be a set of interchangeable queue managers such as with the CCDT, but each is independent.
With WAS the messaging engine may run on several nodes, but it provides a single logical queue from which applications can get messages. With WMQ each node hosting that queue gets a subset of the messages and any application consuming those messages sees only that subset.
HTTP is stateless and so an F5 URL works great. When it does maintain a session, that session exists mainly to optimize away connection overhead and tends to be short lived. WMQ client channels are stateful and coordinate both single-phase and two-phase units of work. If an application fails over to another QMgr during a UOW, it has no way to reconcile that UOW.
Because of the nature of WMQ connections, F5 is never used between QMgrs. It is only used between client and QMgr for connection balancing, not message traffic balancing. Furthermore, the absence or presence of an MQ cluster is entirely transparent to the application which, in either case, simply connects to a QMgr to get and/or put messages. Use of a multi-instance CONNAME or a CCDT file makes that connection more robust by providing multiple equivalent QMgrs to which the client can connect, but that has nothing whatever to do with WMQ clustering.
Does that help?
Please see:
Clustering
How Clusters Work
Queue manager groups in the CCDT
Connecting WebSphere MQ MQI client applications to queue managers
I'm a beginner with WebSphere MQ. I was working on MQ 6 and it was working fine, but now I've installed MQ 7.1 and when I try to create a new queue manager I can create it, but it can't connect and it gives me the following error:
Do you have any idea about that? Thank you :)
If either the WebSphere MQ Client or Server is installed, you can look up any WebSphere MQ error code using the mqrc command. In this case:
C:\Users\MUSR_MQADMIN>mqrc 2059
2059 0x0000080b MQRC_Q_MGR_NOT_AVAILABLE
The 2059 usually indicates that the listener is not running or the queue manager is down. There's a different error code if the listener is running and the QMgr name is wrong, and another one if the connection is made to the right QMgr but the channel name is wrong. Sometimes you can get a 2059 if the channel was closed at the server side by an exit, but since you didn't mention any exits, I'm assuming in this case that it's a listener problem.
Hopefully by now you are defining a listener object rather than using inetd or the runmqlsr command. Defining an object and setting it to start and stop under QMgr control is the most reliable way to configure it.
Once you get past the 2059, you should be aware that as of WMQ V7.1, the queue managers are secure by default and won't accept any remote client connections unless you explicitly authorize them. This is the opposite of the behavior of V6 where on a newly defined queue manager running a listener, anyone with a TCP route to it could administer it and remotely execute OS code as the mqm user. So I expect that the next problem you run into will be 2035 errors.
I've been told this means more work for the WMQ administrator. The only case in which that's true is if the V6 or earlier queue manager had been configured without security. If the tasks to secure a V7.0 QMgr are compared to the tasks to provision access on a V7.1 or higher QMgr, provisioning access turns out to be easier. However, if you liked the V7.0 behavior, you can always alter the QMgr to disable CHLAUTH rules. Needless to say, leaving security enabled is highly encouraged.
To debug security errors, alter the QMgr to enable authorization events using the runmqsc command ALTER QMGR AUTHOREV(ENABLED). Next, download and install SupportPac MS0P into WebSphere MQ Explorer. Then when you do get a security error, use WebSphere MQ Explorer to look at the queue. Right-click on the queue and select the option to parse the event messages. This will tell you in excruciating detail all the information you need to debug the authorization error.
Finally, if you wish to read up on the new security features, go to t-rob.net/links and look at the conference presentations there. There are also some articles indexed if you scroll down.
In the screen-shot, I see hostname "127.0.0.1" and port # 1414. If it is a local queue manager then connect directly to it.
Also, each queue manager MUST use a unique port number. If you had it working with WMQ v6 queue manager, is this the same queue manager? If not, then make sure each queue manager uses a different port number (i.e. 1415, 1416, etc...)
I got the same problem, but I resolved it by:
1. Creating a listener manually: DEFINE LISTENER(LSTR1) TRPTYPE(TCP) PORT(xxxx) CONTROL(QMGR)
2. Setting MCAUSER('mqm') on the channel.
I kind of don't understand when to use the MQ client connection channel. From my understanding, when a client is trying to connect to an MQ server, it can do so by specifying the server connection channel name directly in the application code. If so, why do we need to make use of a client connection channel at all?
Please explain to me in detail. Thanks very much.
A Server Connection Channel is used by clients to connect to a queue manager.
You don't really use a client connection channel to connect to a queue manager. A client connection channel defines the connection parameters required to connect to a queue manager, for example the queue manager name, connection name, SSL parameters, etc. These channel definitions are stored in client channel definition table (CCDT) files. CCDT files are used by client applications through the MQCHLLIB and MQCHLTAB environment variables.
This link and another have a little more detail.
In older versions of WebSphere MQ, a Client Channel Definition Table was used to specify SSL parameters and for failover so the application could select from several equivalent queue managers at connection time. The CCDT file is a compiled artifact and the DEFINE CHL(channel name) CHLTYPE(CLNTCONN) command is what generates the entries in the CCDT file. So you would only use the CLNTCONN channel type if you wanted to create a CCDT file.
Newer versions of WebSphere MQ expose the CCDT fields in the MQCONNX API and the reconnection parameters are in the CONNAME parameter and the client.ini file. Although these have made the CCDT file obsolete for newer applications, the functionality is still required for commercial and legacy applications. IBM has not announced that CCDT functionality is deprecated and it is in V7.5 which was just released so that functionality will remain for the foreseeable future.
What is a channel?
A channel is a logical communication link between a WebSphere® MQ client and a WebSphere MQ server, or between two WebSphere MQ servers. A channel has two definitions: one at each end of the connection. The same channel name must be used at each end of the connection, and the channel type used must be compatible.
WebSphere® MQ uses two different types of channels:
Message Channel
MQI Channel
A message channel is a unidirectional communications link between two queue managers. WebSphere MQ uses message channels to transfer messages between the queue managers. To send messages in both directions, you must define a channel for each direction.
A message channel is a one-way link. It connects two queue managers by using message channel agents (MCAs). Its purpose is to transfer messages from one queue manager to another. Message channels are not required by the client server environment.
An MQI channel is bidirectional and connects an application (MQI client) to a queue manager on a server machine. WebSphere MQ uses MQI channels to transfer MQI calls and responses between MQI clients and queue managers.