Apache Qpid JMS client message producer getting stuck and not delivering to queue - jms

I am trying to send a message to the Qpid broker over the AMQP 1.0 protocol. The queue is named queue2 and has already been created under the default virtual host. However, producer.send(message) gets stuck forever. The same code works when connecting to Azure Service Bus. I'm using qpid-jms-client 0.58. The producer code is:
Hashtable<String, String> hashtable = new Hashtable<>();
hashtable.put("connectionfactory.myFactoryLookup", protocol + "://" + url + "?amqp.idleTimeout=120000&amqp.traceFrames=true");
hashtable.put("queue.myQueueLookup", queueName);
hashtable.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.qpid.jms.jndi.JmsInitialContextFactory");
Context context = new InitialContext(hashtable);
ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup");
queue = (Destination) context.lookup("myQueueLookup");
Connection connection = factory.createConnection(username, password);
connection.setExceptionListener(new AmqpConnectionFactory.MyExceptionListener());
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
// session.createQueue("queue3");
Queue queue = new JmsQueue("queue2");
MessageProducer messageProducer = session.createProducer(queue);
TextMessage textMessage = session.createTextMessage("new message");
messageProducer.send(textMessage); // blocks here forever
I can see that the connection and session are successfully established on the Qpid broker dashboard.
Thread dump for the application at the time of producing:
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000078327c550> (a org.apache.qpid.jms.provider.ProgressiveProviderFuture)
at java.lang.Object.wait(Object.java:502)
at org.apache.qpid.jms.provider.ProgressiveProviderFuture.sync(ProgressiveProviderFuture.java:154)
- locked <0x000000078327c550> (a org.apache.qpid.jms.provider.ProgressiveProviderFuture)
at org.apache.qpid.jms.JmsConnection.send(JmsConnection.java:773)
at org.apache.qpid.jms.JmsNoTxTransactionContext.send(JmsNoTxTransactionContext.java:37)
at org.apache.qpid.jms.JmsSession.send(JmsSession.java:964)
at org.apache.qpid.jms.JmsSession.send(JmsSession.java:843)
at org.apache.qpid.jms.JmsMessageProducer.sendMessage(JmsMessageProducer.java:252)
at org.apache.qpid.jms.JmsMessageProducer.send(JmsMessageProducer.java:182)
I have tried running this example, which gave the same result.

In general, if the client is not sending, it is because the remote has not granted it credit to do so. You can debug the client state using the protocol trace feature (just set PN_TRACE_FRM=true and run the client).
Likely you have misconfigured Broker-J somehow: either the destination you've created doesn't allow any messages to be sent, or you've sent enough that you've tripped the write limit. You should consult the configuration guide and review what you've already set up.
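With the qpid-jms client specifically, frame tracing is driven by the amqp.traceFrames=true URI option the question already uses, combined with a TRACE-level logger; a minimal sketch, assuming SLF4J backed by Log4j:
# log4j.properties: qpid-jms writes decoded AMQP frames to this dedicated logger
log4j.logger.org.apache.qpid.jms.provider.amqp.FRAMES=TRACE
Separately, if you would rather have send() fail fast than block indefinitely while waiting for credit, the jms.sendTimeout URI option (in milliseconds) should make the producer throw instead of hanging.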

Okay, finally got the issue: the filesystem was over 90 percent full, which caused the broker to enforce flow control. After deleting some files from my machine, it started working.
https://qpid.apache.org/releases/qpid-broker-j-7.0.7/book/Java-Broker-Runtime-Disk-Space-Management.html

Related

JMSWMQ0018: Failed to connect to queue manager '' with connection mode 'Bindings' and host name '(1414)'

I'm trying to use a simplified JMS MQ connection based on the example JmsPutGet.java:
private static void testQueueManagerNew() throws JMSException {
JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
JmsConnectionFactory cf = ff.createConnectionFactory();
cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "");
cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);
cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "MY_CNL");
// cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_BINDINGS);
cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, ""); //it should use default QM
JMSContext context = cf.createContext();
Destination destination = context.createQueue("queue:///" + "MY_QUEUE");
long uniqueNumber = System.currentTimeMillis() % 1000;
TextMessage message = context.createTextMessage("Your lucky number today is " + uniqueNumber);
JMSProducer producer = context.createProducer();
producer.send(destination, message);
LOGGER.info("Sent message:{}{}", message, System.lineSeparator());
JMSConsumer consumer = context.createConsumer(destination); // autoclosable
String receivedMessage = consumer.receiveBody(String.class, 15000); // in ms or 15 seconds
LOGGER.info("Rreceived message:{}{}", receivedMessage, System.lineSeparator());
}
The changes I made are: using the default queue manager (WMQConstants.WMQ_QUEUE_MANAGER is an empty string), using the 'Bindings' connection mode (WMQConstants.WMQ_CM_BINDINGS), and removing the host (WMQConstants.WMQ_HOST_NAME is an empty string). I received the following exception:
com.ibm.msg.client.jms.DetailedIllegalStateRuntimeException: JMSWMQ0018: Failed to connect to queue manager '' with connection mode 'Bindings' and host name '(1414)'.
at com.ibm.msg.client.jms.DetailedIllegalStateException.getUnchecked(DetailedIllegalStateException.java:274)
at com.ibm.msg.client.jms.internal.JmsErrorUtils.convertJMSException(JmsErrorUtils.java:173)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createContext(JmsConnectionFactoryImpl.java:478)
at poc.ibmmq.defaultqm.DefaultQM.testQueueManagerNew(DefaultQM.java:86)
at poc.ibmmq.defaultqm.DefaultQM.main(DefaultQM.java:59)
Caused by: com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2059' ('MQRC_Q_MGR_NOT_AVAILABLE').
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:203)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:418)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createV7ProviderConnection(WMQConnectionFactory.java:8475)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createProviderConnection(WMQConnectionFactory.java:7815)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl._createConnection(JmsConnectionFactoryImpl.java:303)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createContext(JmsConnectionFactoryImpl.java:444)
It works with 'Client' connection mode when I specify the host, but not with 'Bindings'. The 'Bindings' connection mode also works when the queue manager is specified explicitly (no default used).
Is any extra queue manager setting necessary?
In order to connect to a queue manager with 'Bindings' connection mode, the queue manager must be on the same machine (same O/S image) as the application. 'Bindings' connection mode uses inter-process communication (shared memory) to make the connection.
When making a connection using 'Client' connection mode, the application connects to the queue manager over a TCP/IP socket, using the host and port number provided.
When making a connection using 'Client' connection mode, it is not necessary to provide a queue manager name on the connection call if you are happy to connect to whichever queue manager appears at the other end of the TCP/IP socket.
When making a connection using 'Bindings' connection mode, the queue manager name is used to determine which local process to make the inter-process request to. You can only omit this name if you have nominated one of your locally hosted queue managers to be the default queue manager on this machine. It is not enough to have only one queue manager; you must still nominate it as the default.
In order to see whether you have any queue manager marked as the default on your machine, issue the following command:
dspmq -o default
If you do not have a default queue manager, you can make one of your locally hosted queue managers the default by following the instructions here.
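For reference, a minimal sketch of nominating a default queue manager, assuming a local queue manager named QM1 (verify against IBM's documentation for your platform before editing anything):
# when creating a new queue manager, -q marks it as the default
crtmqm -q QM1
# for an existing queue manager, set the DefaultQueueManager stanza in mqs.ini
DefaultQueueManager:
  Name=QM1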

Apache Artemis doesn't stop scanning for expires

I'm using Apache Artemis ActiveMQ 2.6.3 as an MQTT broker embedded in a Spring 5 application:
@Bean(initMethod = "start", destroyMethod = "stop")
fun embeddedActiveMQ(securityManager: ActiveMQJAASSecurityManager) =
    EmbeddedActiveMQ().apply {
        setConfiguration(getEmbeddedActiveMQConfiguration())
        setConfigResourcePath("activemq-broker.xml")
        setSecurityManager(securityManager)
    }

private fun getEmbeddedActiveMQConfiguration() =
    ConfigurationImpl().apply {
        addAcceptorConfiguration("netty", DefaultConnectionProperties.DEFAULT_BROKER_URL)
        addAcceptorConfiguration("mqtt", "tcp://$host:$mqttPort?protocols=MQTT")
        name = brokerName
        bindingsDirectory = "$dataDir${File.separator}bindings"
        journalDirectory = "$dataDir${File.separator}journal"
        pagingDirectory = "$dataDir${File.separator}paging"
        largeMessagesDirectory = "$dataDir${File.separator}largemessages"
        isPersistenceEnabled = persistence
        connectionTTLOverride = 60000
    }
Although I'm setting the connection TTL to 60 seconds in the Kotlin code above, as suggested in the documentation, and the client disconnected and terminated an hour ago, the log still shows the following entries:
2020-06-22 10:57:03,890 [Thread-29 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#ade4717)] DEBUG o.a.a.a.core.server.impl.QueueImpl - Scanning for expires on client1.some-topic
2020-06-22 10:58:03,889 [Thread-35 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#ade4717)] DEBUG o.a.a.a.core.server.impl.QueueImpl - Scanning for expires on client1.some-topic
Based on these log entries, I'm afraid that "dead" connection resources are never cleaned up by the server.
What should I do to actually remove the "dead" connections from the server to avoid leaking resources?
The broker will often create resources like addresses, queues, etc. to deal with clients. In the case of MQTT clients the broker will create queues which essentially represent the client's subscriptions.
In this particular case a queue named client1.some-topic has been created for an MQTT subscription and the broker is scanning that queue for expired messages. At this point it looks like the broker is working as designed.
When a client disconnects without unsubscribing, what the broker does with the subscription depends on whether the client used a clean session.
If the client used a clean session, then the broker will delete the subscription queue when the client disconnects (even in the event of a failure).
Otherwise the broker is obliged to hold on to the subscription queue and route messages to it. If the client never reconnects to unsubscribe, the subscription may fill up with messages, trigger the broker's paging mode, and eventually even limit message production altogether. In this case the client can either reconnect and unsubscribe (see the sketch below), or the subscription queue can be removed administratively.
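A minimal sketch of the reconnect-and-unsubscribe option, assuming the Eclipse Paho MQTT client; the client id client1 and topic some-topic are taken from the queue name in the log above, and the broker URL is a placeholder:
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class CleanUpSubscription {
    public static void main(String[] args) throws Exception {
        // Reconnect with the SAME client id the subscription was created under,
        // and cleanSession=false so the broker resumes the existing session.
        MqttClient client = new MqttClient("tcp://localhost:1883", "client1");
        MqttConnectOptions options = new MqttConnectOptions();
        options.setCleanSession(false);
        client.connect(options);
        // Unsubscribing allows the broker to delete the client1.some-topic queue.
        client.unsubscribe("some-topic");
        client.disconnect();
    }
}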

IBM WebSphere MQ Queue.Get() call hangs forever in case of loss in connectivity

I'm using IBM MQ version 8.0.0.0 in a .NET application written in C#. I'm trying to read messages from a queue using the code below.
.....
Hashtable props = new Hashtable();
props.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);
props.Add(MQC.CONNECT_OPTIONS_PROPERTY, MQC.MQCNO_RECONNECT_Q_MGR); // Reconnect option
openOptions = MQC.MQOO_INPUT_SHARED | MQC.MQOO_FAIL_IF_QUIESCING;
queueManager = new MQQueueManager(queueManagerName, props);
this.queue = queueManager.AccessQueue(queueName, openOptions);
....
MQGetMessageOptions gmo = new MQGetMessageOptions();
gmo.Options = MQC.MQGMO_FAIL_IF_QUIESCING
| MQC.MQGMO_WAIT | MQC.MQGMO_SYNCPOINT;
gmo.MatchOptions = MQC.MQMO_NONE;
gmo.WaitInterval = 5000; // I'm specifying this
var message = new MQMessage();
this.queue.Get(message, gmo); // Waits here forever in case connection is lost to IBM MQ.
.........
.........
Now, in case there is a loss of connectivity to the MQ server AFTER the connection is established but BEFORE a queue.Get() call is issued, I'm seeing that the Get() call waits forever and doesn't return despite the specified WaitInterval.
Also, I observed that as soon as connectivity is restored, the .Get() call returns immediately with the message that it has read from the queue.
Am I doing something wrong?
Edit:
Added the queue manager creation code with the properties, one of which instructs the client to reconnect (if possible) to the same queue manager.
From this observation:
Also, I observed that as soon as connectivity is restored, the .Get() call returns immediately with the message that it has read from the queue.
The connection to the queue manager was lost while the GET call was in progress, so the MQ .NET client is attempting to reconnect to the queue manager. While reconnection attempts are going on, the application will perceive the method call as 'hanging'. This is normal. So the question is: have you enabled automatic reconnection in your application? Show the complete code.
Update
It's expected behavior, because the Get call is internally attempting to reconnect to the queue manager. You can:
1) Reduce the reconnection timeout in the mqclient.ini file. An example is below.
Channels:
MQReconnectTimeout = 100
2) Check why the queue manager is down and bring it back up.
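(As far as I know, MQReconnectTimeout is specified in seconds and defaults to 1800, i.e. a 30-minute reconnection window, which is why a Get issued over a broken connection can appear to wait forever; the value of 100 in the mqclient.ini example above shortens that window.)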

How can I fix a new activemq-artemis install blocking issue?

I've been tasked with evaluating activemq-artemis for JMS clients. I have RabbitMQ experience, but none with activemq-artemis/JMS.
I installed Artemis on my local machine, created a new broker per the instructions, and set it up as a Windows service. The Windows service starts and stops just fine. I've made no changes to the broker.xml file.
For my first test I'm trying to perform a JMS queue produce/consume from a standalone Java program. I'm using the code from the Artemis User Manual in the Using JMS section (without using JNDI):
TransportConfiguration transportConfiguration = new TransportConfiguration(NettyConnectorFactory.class.getName());
ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF,transportConfiguration);
Queue orderQueue = ActiveMQJMSClient.createQueue("OrderQueue");
Connection connection = cf.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(orderQueue);
MessageConsumer consumer = session.createConsumer(orderQueue);
connection.start();
TextMessage message = session.createTextMessage("This is an order");
producer.send(message);
TextMessage receivedMessage = (TextMessage)consumer.receive();
System.out.println("Got order: " + receivedMessage.getText());
When I run this code, I get the following error:
WARN: AMQ212054: Destination address=jms.queue.OrderQueue is blocked. If the system is configured to block make sure you consume messages on this configuration.
My research hasn't been conclusive on whether this is a server-side setting or whether the producer should send without blocking. I haven't been able to find a producer send method that takes a blocking boolean, only a persistence flag. Any ideas on where to focus? Thanks.
Edit: new address-setting element added to broker.xml dedicated to this Queue:
<address-setting match="jms.queue.OrderQueue">
<max-size-bytes>104857600</max-size-bytes>
<page-size-bytes>10485760</page-size-bytes>
<address-full-policy>PAGE</address-full-policy>
</address-setting>
I found this on further research in the user manual:
max-disk-usage The max percentage of data we should use from disks.
The System will block while the disk is full. Default=100
and in the log after service startup with no messages published yet:
WARN [org.apache.activemq.artemis.core.server] AMQ222210: Storage usage is beyond max-disk-usage. System will start blocking producers.
so I think that no matter what my address settings were, it would start blocking. Looking at the max-disk-usage setting in broker.xml, it was set to 90. The documentation says the default is 100; I set it to that, got no startup log warnings, and my test pub/sub code now works.
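For reference, a minimal sketch of the element that was changed; max-disk-usage lives directly under the core element of broker.xml (note that 100 means the broker only blocks when the disk is completely full, so on a shared machine a value below 100 is safer):
<core xmlns="urn:activemq:core">
  <!-- ... other settings ... -->
  <max-disk-usage>100</max-disk-usage>
</core>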
This warning appears when the address-full-policy is set to BLOCK and the memory limit has been reached. Check the address-full-policy set in broker.xml: if it is set to BLOCK, change it to PAGE, or consume the pending messages from OrderQueue.
By default the max-disk-usage value is set to 90 (%), so if the remaining free disk space falls below 10%, this warning is shown and no messages will be accepted until you adjust the parameter or free up space.

ActiveMQ - handle connection, session, producer and consumer upon failover

I use the failover transport feature with the following pattern in the broker URL:
failover:(tcp://host:port)
The init code goes as follows:
factory = new PooledConnectionFactory(BROKER_URL);
connection = factory.createConnection();
connection.start();
The code that puts a message looks more or less like this:
session = connection.createSession( false, Session.AUTO_ACKNOWLEDGE );
Destination destQueue = new ActiveMQQueue(queue);
MessageProducer producer = session.createProducer(destQueue);
TextMessage msg = session.createTextMessage(message);
producer.send(msg);
When a failover occurs -
[org.apache.activemq.transport.failover.FailoverTransport] Transport (broker) failed, reason: , attempting to automatically reconnect: java.net.SocketException: recv failed: Connection aborted by peer
and it gets reconnected after:
[org.apache.activemq.transport.failover.FailoverTransport] Failed to connect to [broker] after: 10 attempt(s) continuing to retry.
08:55:29,596 INFO [org.apache.activemq.transport.failover.FailoverTransport] Successfully reconnected to broker
Do I have to re-initiate the connection? Or, to be more specific, do I have to do anything with the connection object to be able to produce/consume messages after the failover?
Thanks
The whole point of the failover transport is to handle the reconnection for you. The logs you've shown indicate a successful reconnect cycle where the transport has continued to retry to connect to a broker and eventually did so.
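If you want visibility into those transitions from application code, here is a minimal sketch using ActiveMQ's TransportListener; it assumes a plain ActiveMQConnectionFactory (PooledConnectionFactory wraps the underlying ActiveMQConnection, making the listener harder to reach), and host:port is the placeholder from the question:
import java.io.IOException;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.transport.TransportListener;

public class FailoverAwareness {
    public static void main(String[] args) throws JMSException {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("failover:(tcp://host:port)");
        ActiveMQConnection connection = (ActiveMQConnection) factory.createConnection();
        connection.addTransportListener(new TransportListener() {
            public void onCommand(Object command) { /* every inbound command */ }
            public void onException(IOException error) { /* transport failed permanently */ }
            public void transportInterupted() { /* failover in progress; sends will block */ }
            public void transportResumed() { /* reconnected; no re-initialization needed */ }
        });
        connection.start();
        // The same connection, sessions, and producers keep working across the failover.
    }
}
(transportInterupted is the method's actual spelling in the ActiveMQ API.)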
