NServiceBus DBMS connection timeout - Oracle

I use NServiceBus with Oracle Advanced Queuing (OAQ) instead of MSMQ.
I have a problem working with a DBMS server that is shut down every day at the same time.
In particular, when my NServiceBus host can't get a DBMS connection it starts logging errors.
When the DBMS is restarted, my host sometimes recovers and sometimes doesn't! However, after restarting my host everything is ok!
Another detail is that when my NServiceBus host can't recover, it logs a 'connection timeout' message every 15 seconds!
What is the behavior of NServiceBus when it's reading from a queue and the DBMS crashes? What could I do to solve this problem?
thank you,
R

I'm afraid the problem you're facing is the result of the design of your system. By having the queues in the DB, when the DB becomes unavailable, so do the queues. NServiceBus assumes that it is always able to communicate with its queues, as is the case when using a distributed/federated queuing system like MSMQ.
You can look at what some people in the community have done to combat this same problem when they were using IBM MQ (http://code.google.com/p/nservicebuswmq/) - ultimately falling back to MSMQ under those conditions and then syncing back up with MQ when it came back online.

Related

Can you check to see if an IBM MQ topic is up and available through a Java application before attempting to create a connection?

I would like to add some conditional logic to our Java application code for attempting to create a JMS Topic Connection. I have seen problems in the past stemming from attempting to create a connection when the MQ server had been restarted or was currently down. One improvement I added was to check for the quiescent state, and another was to increase the timer before attempting reconnection to our durable topic queue.
Is there a way to confirm with the MQ server/topic/channel that it is up and running and a connection request can safely be made?
The best way to confirm that a queue manager (and the channel you are using to connect to the queue manager) is up and running is to attempt to connect to it.
If your connection attempt fails, you will get an MQ reason code telling you exactly why. This is a much better way to confirm than any administrative command, because it also confirms that your application, and its security context, is correct and able to connect to the queue manager. It is completely possible to have an up-and-running queue manager but an application that is not yet correctly configured to use it. So connect from the application, and if it works, the queue manager is up and running.
Your comment about increasing the timer before attempting to reconnect after a failure is well made. It doesn't help anyone if you hammer the queue manager with lots of closely spaced connection attempts until it is ready to accept your connection. Still, anything that is going to test the availability of the queue manager ultimately needs to connect to it, so, very simply, just connect.
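As a rough illustration of this advice (not the original poster's code), here is a minimal sketch using the IBM MQ classes for JMS: attempt the connection, report the failure and its linked MQ reason, and back off before retrying. The host, port, channel and queue manager names are placeholders, and the 30-second retry interval is just an example.

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class QmgrAvailabilityCheck {
        public static void main(String[] args) throws Exception {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setHostName("mqhost.example.com");   // placeholder host
            cf.setPort(1414);                       // placeholder listener port
            cf.setQueueManager("QM1");              // placeholder queue manager name
            cf.setChannel("APP.SVRCONN");           // placeholder SVRCONN channel
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

            while (true) {
                try {
                    // The only reliable availability test is the connection itself.
                    Connection connection = cf.createConnection();
                    connection.start();
                    System.out.println("Queue manager is up and this application can connect.");
                    connection.close();
                    break;
                } catch (JMSException e) {
                    // The linked exception normally carries the MQ reason code (e.g. 2059).
                    System.err.println("Connect failed: " + e.getMessage()
                            + (e.getLinkedException() != null
                               ? " / " + e.getLinkedException().getMessage() : ""));
                    Thread.sleep(30_000); // back off instead of hammering the queue manager
                }
            }
        }
    }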

Queue is sometimes missing from the WebLogic JNDI tree

We are facing a JNDI issue.
We have four managed servers, and across these servers we have 80 queues.
After restarting a server we can see all the queues in the console, but after some time one of the queues goes missing. There is no error in the log file. After restarting that server, the missing queue reappears.
Can anyone help with this issue?

JMS Listener Not Picking Up Message From the Queue

I am planning to do code change for an existing application which has a JMS listener.
To test whether the listener works on my local server, I deploy the application to my localhost and shut down the other containers that are running the same application.
But my local listener won't pick up any messages. It is confirmed that the other containers work fine and can pick up and process new messages from the queue.
Can you think of any possible cause of this?
Way too general, too many missing points...but some things to look at:
if the message queue is on a different server, can you ping it from the local machine? It could be that the development environment can't see the production server, perhaps
does a netstat -n show the correct port number? You should see a remote connection to the port on which the message provider is listening
can you verify that the messaging provider sees you as a consumer? I use ActiveMQ; I can look at the management console, dive into a specific queue, and view active consumers. Most providers will have something similar
are you running in an identical environment? Running a listener in a JEE environment where the queue is a JNDI reference might be different from running in a debugger where you need the actual queue name
any JMS filtering going on, where the selector for your local environment doesn't match up with what's already on the queue? (see the short consumer sketch after this list)
any transaction manager stuff that may be getting in the way?
Again, just throwing stuff to see what sticks to the wall, but these are the really obvious things.
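To make the JNDI and selector points above concrete, here is a hedged sketch of a plain JMS consumer; the JNDI names, the selector and the 5-second timeout are made-up placeholders. The point is only that looking up the wrong destination, or using a selector that doesn't match the messages already on the queue, both produce a listener that appears to sit idle without any error.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class ListenerCheck {
        public static void main(String[] args) throws Exception {
            // In a JEE container the factory and queue usually come from JNDI;
            // in a standalone debugger you may need the provider's own factory
            // class and the physical queue name instead.
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

            Connection connection = cf.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // A selector silently filters messages: if the messages on the queue don't
            // carry region='LOCAL', this consumer receives nothing despite the queue depth.
            MessageConsumer consumer = session.createConsumer(queue, "region = 'LOCAL'");
            connection.start();

            Message msg = consumer.receive(5000); // wait up to 5 seconds
            System.out.println(msg == null
                    ? "No message matched the selector (or none arrived)."
                    : "Got a message: " + msg);

            connection.close();
        }
    }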
Thanks Scott for answering my question.
I finally found that Eclipse somehow created another container and my listener was deployed to it. That's why I couldn't see it working in my current container.

ActiveMQ crashes after some time

I am using ActiveMQ 5.10.0. I have many consumers that connect to ActiveMQ over STOMP connections, about 2000 consumers each connecting to a unique queue. My problem is that ActiveMQ always crashes/hangs after a couple of hours, and the log does not show any error, which makes it difficult to trace the problem. As far as I can tell, new consumers can't create new connections and subscriptions to queues after ActiveMQ crashes, and I can't open the web console either. Is there any way to improve the performance of ActiveMQ to handle many connections, or anything else for performance tuning?
In order to debug the exact issue, try:
Enable debug logs for ActiveMQ
Add a TransportListener to the ActiveMQ connection/connection factory, logging connection interruptions, resumes and exceptions (a sketch follows below)
Refer to http://activemq.apache.org/maven/apidocs/org/apache/activemq/transport/TransportListener.html
JMX monitoring via JConsole or JVisualVM; a thread dump might also help in debugging
We faced a similar issue in a production environment where producers were able to produce but consumers were not consuming after 24-48 hours of everything running fine. Restarting ActiveMQ led to the consumers consuming messages again. We haven't found the exact cause/fix yet but recently added the above debugging steps.
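As a rough sketch of the TransportListener suggestion above (the broker URL is a placeholder), a listener can be registered on the connection so interruptions, resumptions and transport exceptions show up in the client's own logs:

    import java.io.IOException;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnection;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.transport.TransportListener;

    public class TransportListenerExample {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("failover:(tcp://broker-host:61616)");

            ActiveMQConnection connection = (ActiveMQConnection) factory.createConnection();

            // Log transport-level events so a broker hang leaves a trace on the client side.
            connection.addTransportListener(new TransportListener() {
                public void onCommand(Object command) {
                    // Called for every inbound command; usually too noisy to log.
                }
                public void onException(IOException error) {
                    System.err.println("Transport exception: " + error);
                }
                public void transportInterupted() { // spelled this way in the ActiveMQ API
                    System.err.println("Transport interrupted - broker unreachable?");
                }
                public void transportResumed() {
                    System.err.println("Transport resumed.");
                }
            });

            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // ... create consumers/producers on the session as usual ...
        }
    }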

Can't connect to WebSphere MQ Queue Manager

I'm a beginner with WebSphere MQ. I was working with MQ 6 and it was working fine, but now I've installed MQ 7.1, and while I can create a new queue manager, I can't connect to it and it gives me the following error:
Do you have any idea about that? Thank you :)
You can look up any WebSphere MQ error code if either the WebSphere MQ Client or Server are installed using the mqrc command. In this case:
C:\Users\MUSR_MQADMIN>mqrc 2059
2059 0x0000080b MQRC_Q_MGR_NOT_AVAILABLE
The 2059 usually indicates that the listener is not running or the queue manager is down. There's a different error code if the listener is running and the QMgr name is wrong, and another one if the connection is made to the right QMgr but the channel name is wrong. Sometimes you can get a 2059 if the channel was closed at the server side by an exit, but since you didn't mention any exits, I'm assuming in this case that it's a listener problem.
Hopefully by now you are defining a listener object rather than using inetd or the runmqlsr command. Defining an object and setting it to start and stop under QMgr control is the most reliable way to configure it.
Once you get past the 2059, you should be aware that as of WMQ V7.1, the queue managers are secure by default and won't accept any remote client connections unless you explicitly authorize them. This is the opposite of the behavior of V6 where on a newly defined queue manager running a listener, anyone with a TCP route to it could administer it and remotely execute OS code as the mqm user. So I expect that the next problem you run into will be 2035 errors.
I've been told this means more work for the WMQ administrator. The only case in which that's true is if the V6 or earlier queue manager had been configured without security. If the tasks to secure a V7.0 QMgr are compared to the tasks to provision access on a V7.1 and higher QMgr, provisioning access turns out to be easier. However, if you liked the V7.0 behavior, you can always alter the QMgr to disable CHLAUTH rules. Needless to say, leaving security enabled is highly encouraged.
To debug security errors, alter the QMgr to enable authorization events using the runmqsc command ALTER QMGR AUTHOREV(ENABLED). Next, download and install SupportPac MS0P into WebSphere MQ Explorer. Then when you do get a security error, use WebSphere MQ Explorer to look at the queue. Right-click on the queue and select the option to parse the event messages. This will tell you in excruciating detail all the information you need to debug the authorization error.
Finally, if you wish to read up on the new security features, go to t-rob.net/links and look at the conference presentations there. There are also some articles indexed if you scroll down.
In the screen-shot, I see hostname "127.0.0.1" and port # 1414. If it is a local queue manager then connect directly to it.
Also, each queue manager MUST use a unique port number. If you had it working with WMQ v6 queue manager, is this the same queue manager? If not, then make sure each queue manager uses a different port number (i.e. 1415, 1416, etc...)
I got the same problem, but I resolved it by:
1. creating a listener manually: define lstr(lstr1) port(xxxx) control(qmgr)
2. setmqaut mcauser('mqm').
