JMS Listener Not Picking Up Message From the Queue

I am planning to make a code change to an existing application that has a JMS listener.
To test whether the listener works on my local server, I deployed the application to my localhost and shut down the other containers running the same application.
But my local listener won't pick up any messages. It is confirmed that the other containers work fine and can pick up and process new messages from the queue.
Can you think of any possible cause for this?

This is way too general and too many details are missing... but here are some things to look at:
if the message queue is on a different server, can you ping it from the local machine? It could be that the development environment can't see the production server, for example
does a netstat -n show the correct port number? You should see an established connection whose remote port is the one the message provider is listening on
can you verify that the messaging provider sees you as a consumer? I use ActiveMQ; I can open the management console, drill into a specific queue, and view its active consumers. Most providers have something similar
are you running in an identical environment? Running a listener in a JEE environment where the queue is a JNDI reference might behave differently from running in a debugger where you need the actual queue name
is any JMS filtering going on, where the message selector in your local environment doesn't match what's already on the queue? (see the sketch after this list)
is any transaction manager configuration getting in the way?
Again, I'm just throwing stuff at the wall to see what sticks, but these are the really obvious things to check.
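To make the message selector point concrete, here is a minimal sketch of a queue listener registered with a selector, using the plain javax.jms API. The JNDI names and the selector string are made-up examples (not from the question), and the lookup assumes a jndi.properties on the classpath; a selector that doesn't match the properties producers actually set will leave this listener silent even while other consumers drain the queue.

import javax.jms.*;
import javax.naming.InitialContext;

public class SelectorListenerSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical JNDI names; replace with the real ones for your container/provider.
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/OrdersQueue");

        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Only messages with a matching string property are delivered to this consumer.
        // If producers never set region='EU', this listener sits idle even though the
        // queue is being filled and drained by other consumers.
        MessageConsumer consumer = session.createConsumer(queue, "region = 'EU'");
        consumer.setMessageListener(message -> System.out.println("Received: " + message));

        // Another classic reason a listener receives nothing: forgetting to start the connection.
        connection.start();
    }
}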

Thanks Scott for answering my question.
I finally found that Eclipse had somehow created another container and my listener was deployed to it. That's why I couldn't see it working in my current container.

Related

Can you check to see if an IBM MQ topic is up and available through a Java application before attempting to create a connection?

I would like to add some conditional logic to our Java application code for attempting to create a JMS Topic Connection. I have seen problems in the past stemming from attempting to create a connection when the MQ server had been restarted or was currently down. One improvement I added was to check for the quiescent state, and another was to increase the timer before attempting reconnection to our durable topic queue.
Is there a way to confirm with the MQ server/topic/channel that it is up and running and a connection request can safely be made?
The best way to confirm that a queue manager (and the channel you are using to connect to the queue manager) is up and running is to attempt to connect to it.
If your connection attempt fails, you will get an MQ reason code telling you exactly why. This is a much better check than any administrative command, because it also confirms that your application, and its security context, is correctly configured and able to connect to the queue manager. It is entirely possible to have an up-and-running queue manager but an application that is not yet correctly configured to use it. So connect from the application, and if it works, the queue manager is up and running.
Your point about waiting longer before attempting to reconnect after a failure is well made. It doesn't help anyone to hammer the queue manager with lots of closely spaced connection attempts before it is ready to accept them. Still, anything that is going to test the availability of the queue manager ultimately needs to connect to it, so, very simply, just connect.
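As a rough sketch of the "just connect" check using the plain javax.jms API: the topic connection factory is assumed to be an IBM MQ factory already bound in JNDI under a made-up name, and the 30-second retry interval is an arbitrary illustration of backing off between attempts.

import javax.jms.*;
import javax.naming.InitialContext;

public class TopicAvailabilityCheck {
    /** Returns true if a topic connection can be made right now, false otherwise. */
    static boolean canConnect() {
        try {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory tcf =
                    (TopicConnectionFactory) ctx.lookup("jms/MyTopicConnectionFactory"); // hypothetical JNDI name
            TopicConnection conn = tcf.createTopicConnection();
            conn.close(); // we only wanted proof that the queue manager accepts connections
            return true;
        } catch (JMSException e) {
            // For IBM MQ the linked exception normally carries the MQ reason code (e.g. 2059).
            System.err.println("Connect failed: " + e + ", linked: " + e.getLinkedException());
            return false;
        } catch (Exception e) {
            System.err.println("Connect failed: " + e);
            return false;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        while (!canConnect()) {
            Thread.sleep(30_000); // back off instead of hammering the queue manager with retries
        }
        // safe to go on and create the real durable subscriber here
    }
}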

MQ System w/ brokers - Any way to examine everything

I have inherited an IBM MQ (V6) system that has multiple brokers. Is there a way to explore everything succinctly?
i.e. I know which queue managers are running, so without running "runmqsc" against each and every manager, how can I find broker names, listeners, etc.?
MQ Explorer is running, but again it requires knowing the queue manager and port before it can connect successfully.
For MQ, the dmpmqcfg command can be useful to output your configuration info to a file.
For the broker, try the mqsilist command to list installed brokers and their associated resources.
This webpage may be of help to you:
Performing health checks for WebSphere Message Broker
http://www.ibm.com/developerworks/websphere/library/techarticles/0801_cui/0801_cui.html
To work out which queue managers are running on your machine, use the dspmq command. Then you'll know each queue manager and can run runmqsc against each one, point MQ Explorer at each one, or whatever you need to do next.

JMS with clustered nodes

I have two clustered managed servers running on WebLogic, and separate JMS server1 and JMS server2 running on each managed server. The problem is that in the application properties file we hardcoded and pass only the JMS server1 JNDI name to the application. So the application instances running on both nodes actually use only one fixed JMS server, which is not truly distributed and clustered. If JMS server1 is down, the whole application is down.
My question is: how can the application dynamically find a JMS server in the above scenario? Can you please point me in a direction? Thanks!
It's in the WebLogic docs at: http://docs.oracle.com/cd/E14571_01/web.1111/e13738/best_practice.htm#CACDDFJD
Basically, you create a comma-separated list of servers, and the JMS connection logic should automatically handle the case when one of the servers is down:
e.g.
t3://hostA:7001,hostB:7001
When you use a property like jms.jndi.provider.url=t3://hostA:31122,hostA:31124
it tells WLS to connect to either hostA:31122 or hostA:31124.
Note that your JMS client is connected to only one host at any given time.
When you shut down hostA, the connection between the JMS client and the server is cut abruptly, resulting in an exception; your code will have to handle this exception gracefully and attempt to connect to WLS again periodically to ensure it connects to hostB.
WLS internally will round-robin the requests if more than one instance of the JMS client is running.
When an MDB is used as the JMS client and deployed to a cluster with such a URL, one MDB instance will connect to one host and the other instance to the other host. MDBs also inherently have the ability to reconnect periodically to the JMS destination.
An easy solution to your problem could be to:
1) Set jms.jndi.provider.url=t3://hostA:31122,hostA:31124 (as in the sketch below)
2) Have two instances of the JMS client code running, so one connects to port 31122 and the other to 31124
3) Set Forward-Delay on the JMS queue so that messages don't remain unconsumed in a queue for long and instead get forwarded to the other queue, which has an active consumer.
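Here is a minimal standalone-client sketch of the comma-separated provider URL described above. The JNDI names, hosts, and ports are placeholders, and the exception listener only logs; the periodic reconnect (re-running the lookup and connect) is left to the application, as described in the answer.

import java.util.Hashtable;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ClusteredJmsClientSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Comma-separated list: the context is built against whichever host answers first.
        env.put(Context.PROVIDER_URL, "t3://hostA:7001,hostB:7001");
        InitialContext ctx = new InitialContext(env);

        // Hypothetical JNDI names for a connection factory and distributed destination.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/MyDistributedQueue");

        Connection connection = cf.createConnection();
        // When the host we are attached to goes down, the broken connection surfaces here;
        // the application should react by re-running this lookup/connect sequence.
        connection.setExceptionListener(e -> System.err.println("JMS connection lost: " + e));

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(m -> System.out.println("Got message: " + m));
        connection.start();
    }
}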
I am updating my progress here instead of adding more comments. I have tested with a standalone JMS client by changing the JMS provider URL in the properties file from t3://hostA:7001 to t3://hostA:7001,hostB:7001. The failover is handled automatically by WLS, with no code change. The exception I got above was caused by using wlclient.jar; it worked after I changed to wlfullclient.jar.
I followed this link to generate wlfullclient.jar.
Thanks everyone!

Can't connect Websphere MQ Queue Manager

I'm a beginner with WebSphere MQ. I was working with MQ 6 and it was working fine, but now I've installed MQ 7.1, and while I can create a new queue manager, it can't connect and gives me the following error:
Do you have any idea about that? Thank you :)
You can look up any WebSphere MQ error code, if either the WebSphere MQ Client or Server is installed, using the mqrc command. In this case:
C:\Users\MUSR_MQADMIN>mqrc 2059
2059 0x0000080b MQRC_Q_MGR_NOT_AVAILABLE
The 2059 usually indicates that the listener is not running or the queue manager is down. There's a different error code if the listener is running and the QMgr name is wrong, and another one if the connection is made to the right QMgr but the channel name is wrong. Sometimes you can get a 2059 if the channel was closed on the server side by an exit, but since you didn't mention any exits, I'm assuming in this case that it's a listener problem.
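If you want to see the reason code programmatically rather than from a log, a small connection test with the WebSphere MQ base Java classes (assuming the MQ client jars are on the classpath) will surface it; the host, port, channel, and queue manager name below are placeholders, not values from the screenshot.

import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;

public class QmgrConnectTest {
    public static void main(String[] args) {
        // Placeholder connection details; use your own listener host/port and SVRCONN channel.
        MQEnvironment.hostname = "127.0.0.1";
        MQEnvironment.port = 1414;
        MQEnvironment.channel = "SYSTEM.DEF.SVRCONN";
        try {
            MQQueueManager qmgr = new MQQueueManager("QM1");
            System.out.println("Connected to the queue manager");
            qmgr.disconnect();
        } catch (MQException e) {
            // reasonCode 2059 = MQRC_Q_MGR_NOT_AVAILABLE (listener or queue manager down),
            // reasonCode 2035 = MQRC_NOT_AUTHORIZED (CHLAUTH/OAM blocking the connection).
            System.err.println("Connect failed, completion code " + e.completionCode
                    + ", reason code " + e.reasonCode);
        }
    }
}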
Hopefully by now you are defining a listener object rather than using inetd or the runmqlsr command. Defining an object and setting it to start and stop under QMgr control is the most reliable way to configure it.
Once you get past the 2059, you should be aware that as of WMQ V7.1, the queue managers are secure by default and won't accept any remote client connections unless you explicitly authorize them. This is the opposite of the behavior of V6 where on a newly defined queue manager running a listener, anyone with a TCP route to it could administer it and remotely execute OS code as the mqm user. So I expect that the next problem you run into will be 2035 errors.
I've been told this means more work for the WMQ administrator. The only case in which that's true is if the V6 or earlier queue manager had been configured without security. If you compare the tasks required to secure a V7.0 QMgr with the tasks required to provision access on a V7.1 or higher QMgr, provisioning access turns out to be easier. However, if you liked the V7.0 behavior, you can always alter the QMgr to disable CHLAUTH rules. Needless to say, leaving security enabled is highly encouraged.
To debug security errors, alter the QMgr to enable authorization events using the runmqsc command ALTER QMGR AUTHOREV(ENABLED). Next, download and install SupportPac MS0P into WebSphere MQ Explorer. Then when you do get a security error, use WebSphere MQ Explorer to look at the queue. Right-click on the queue and select the option to parse the event messages. This will tell you in excruciating detail all the information you need to debug the authorization error.
Finally, if you wish to read up on the new security features, go to t-rob.net/links and look at the conference presentations there. There are also some articles indexed if you scroll down.
In the screen-shot, I see hostname "127.0.0.1" and port # 1414. If it is a local queue manager then connect directly to it.
Also, each queue manager MUST use a unique port number. If you had it working with a WMQ v6 queue manager, is this the same queue manager? If not, then make sure each queue manager uses a different port number (e.g. 1415, 1416, etc.).
I had the same problem, but I resolved it by:
1. creating a listener manually: DEFINE LISTENER(LSTR1) TRPTYPE(TCP) PORT(xxxx) CONTROL(QMGR)
2. setting MCAUSER('mqm') on the channel.

How to send a message from Server A to Server B using MSMQ?

How do I set up a message queue that automatically sends all its messages to another server?
I'm working on a proof of concept for a system that needs to run on multiple servers, each writing to local message queues, and then have a central service on another server reading its local queue to pick up all the messages from the other servers.
From what I've read I believe this is possible, but I'm not seeing how to set it up...
Thanks
When your application sends a message to a remote computer, the MSMQ service actually writes the message to a local queue (a temporary outgoing queue), from which it is forwarded to the destination. So in practice the behavior of MSMQ is exactly what you want. Can you elaborate more on your scenario?
Update to comment: There is one problem, though. You can't create a remote queue.
