I'm using Apache Artemis ActiveMQ 2.6.3 as an MQTT broker embedded in a Spring 5 application:
@Bean(initMethod = "start", destroyMethod = "stop")
fun embeddedActiveMQ(securityManager: ActiveMQJAASSecurityManager) =
    EmbeddedActiveMQ().apply {
        setConfiguration(getEmbeddedActiveMQConfiguration())
        setConfigResourcePath("activemq-broker.xml")
        setSecurityManager(securityManager)
    }
private fun getEmbeddedActiveMQConfiguration() =
    ConfigurationImpl().apply {
        addAcceptorConfiguration("netty", DefaultConnectionProperties.DEFAULT_BROKER_URL)
        addAcceptorConfiguration("mqtt", "tcp://$host:$mqttPort?protocols=MQTT")
        name = brokerName
        bindingsDirectory = "$dataDir${File.separator}bindings"
        journalDirectory = "$dataDir${File.separator}journal"
        pagingDirectory = "$dataDir${File.separator}paging"
        largeMessagesDirectory = "$dataDir${File.separator}largemessages"
        isPersistenceEnabled = persistence
        connectionTTLOverride = 60000
    }
Although I'm setting the connection TTL to 60 seconds in the above Kotlin code, as suggested in the documentation, and the client disconnected and terminated an hour ago, the log still shows the following entries:
2020-06-22 10:57:03,890 [Thread-29 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#ade4717)] DEBUG o.a.a.a.core.server.impl.QueueImpl - Scanning for expires on client1.some-topic
2020-06-22 10:58:03,889 [Thread-35 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#ade4717)] DEBUG o.a.a.a.core.server.impl.QueueImpl - Scanning for expires on client1.some-topic
Based on these log entries, I'm afraid that "dead" connection resources are never cleaned up by the server.
What should I do to actually remove the "dead" connections from the server to avoid leaking resources?
The broker will often create resources like addresses, queues, etc. to deal with clients. In the case of MQTT clients the broker will create queues which essentially represent the client's subscriptions.
In this particular case a queue named client1.some-topic has been created for an MQTT subscription and the broker is scanning that queue for expired messages. At this point it looks like the broker is working as designed.
When a client disconnects without unsubscribing, what the broker does with the subscription depends on whether the client used a clean session or not.
If the client used a clean session then the broker will delete the subscription queue when the client disconnects (even in the event of a failure).
Otherwise the broker is obliged to hold on to the subscription queue and route messages to it. If the client never reconnects to unsubscribe then the subscription may fill up with lots of messages and trigger the broker's paging mode and eventually even limit message production altogether. In this case the client can either reconnect and unsubscribe or the subscription queue can be removed administratively.
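As a rough illustration of the administrative route, here is a minimal sketch assuming you keep a reference to the EmbeddedActiveMQ bean configured above; the queue name is the one from the log entries:
// Minimal sketch, assuming access to the EmbeddedActiveMQ instance from the Spring configuration.
// destroyQueue() drops the subscription queue and any messages routed to it.
import org.apache.activemq.artemis.api.core.SimpleString;
import org.apache.activemq.artemis.core.server.ActiveMQServer;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

public class SubscriptionCleanup {

    public static void removeStaleSubscription(EmbeddedActiveMQ embedded) throws Exception {
        ActiveMQServer server = embedded.getActiveMQServer();
        // "client1.some-topic" is the subscription queue named in the log entries
        server.destroyQueue(SimpleString.toSimpleString("client1.some-topic"));
    }
}
The same removal can also be performed through the broker's management interfaces rather than in application code.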
Related
I am trying to send a message to the Qpid broker over the AMQP 1.0 protocol. The queue is named queue2 and it is already created under the default virtualhost. However, producer.send(message) is getting stuck forever. The same code works when connecting to Azure Service Bus. I'm using qpid-jms-client 0.58. The producer code is:
Hashtable<String, String> hashtable = new Hashtable<>();
hashtable.put("connectionfactory.myFactoryLookup", protocol + "://" + url + "?amqp.idleTimeout=120000&amqp.traceFrames=true");
hashtable.put("queue.myQueueLookup", queueName);
hashtable.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.qpid.jms.jndi.JmsInitialContextFactory");
Context context = new InitialContext(hashtable);
ConnectionFactory factory = (ConnectionFactory) context.lookup("myFactoryLookup");
queue = (Destination) context.lookup("myQueueLookup");
Connection connection = factory.createConnection(username, password);
connection.setExceptionListener(new AmqpConnectionFactory.MyExceptionListener());
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
// session.createQueue("queue3");
Queue queue = new JmsQueue("queue2");
MessageProducer messageProducer = session.createProducer(queue);
TextMessage textMessage = session.createTextMessage("new message");
messageProducer.send(textMessage);
I can see the connection and session are successfully established on the Qpid broker dashboard.
Thread dump for the application at the time of producing:
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000078327c550> (a org.apache.qpid.jms.provider.ProgressiveProviderFuture)
at java.lang.Object.wait(Object.java:502)
at org.apache.qpid.jms.provider.ProgressiveProviderFuture.sync(ProgressiveProviderFuture.java:154)
- locked <0x000000078327c550> (a org.apache.qpid.jms.provider.ProgressiveProviderFuture)
at org.apache.qpid.jms.JmsConnection.send(JmsConnection.java:773)
at org.apache.qpid.jms.JmsNoTxTransactionContext.send(JmsNoTxTransactionContext.java:37)
at org.apache.qpid.jms.JmsSession.send(JmsSession.java:964)
at org.apache.qpid.jms.JmsSession.send(JmsSession.java:843)
at org.apache.qpid.jms.JmsMessageProducer.sendMessage(JmsMessageProducer.java:252)
at org.apache.qpid.jms.JmsMessageProducer.send(JmsMessageProducer.java:182)
I have tried to run this example, which gave the same result.
In general, if the client is not sending, it is because the remote has not granted it credit to do so. You can debug the client state using the protocol trace feature (just set PN_TRACE_FRM=true and run the client).
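For reference, here is a minimal sketch of enabling frame tracing from the client side, assuming the qpid-jms client and a broker at amqp://localhost:5672; the amqp.traceFrames URI option is the same one already present in the question's connection string, and frame output is written at TRACE level, so the client's logging must be configured accordingly:
// Minimal sketch, assuming qpid-jms on the classpath and a broker at localhost:5672.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.qpid.jms.JmsConnectionFactory;

public class TraceFramesExample {

    public static void main(String[] args) throws Exception {
        // amqp.traceFrames=true logs each AMQP frame, including the flow frames
        // that show how much credit the broker has granted the producer link.
        ConnectionFactory factory =
                new JmsConnectionFactory("amqp://localhost:5672?amqp.traceFrames=true");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create the session/producer here and watch the logged frames ...
        connection.close();
    }
}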
Likely you have misconfigured the Broker-J somehow and the destination you've created doesn't allow any messages to be sent, or you've sent enough that you've tripped the write limit. You should consult the configuration guide and review what you've already set up.
Okay, finally got the issue: the filesystem was over 90 percent full, enforcing flow control. I deleted files from my machine and it started working.
https://qpid.apache.org/releases/qpid-broker-j-7.0.7/book/Java-Broker-Runtime-Disk-Space-Management.html
In the Spring documentation (32.6 TCP Adapters) it is mentioned that if we use clientMode = "true", the inbound adapter is responsible for maintaining the connection with the external server.
I have created a flow in which the TCP adapter with a client connection factory makes the connection to the external server. The code for the flow is:
IntegrationFlow flow = IntegrationFlows
        .from(Tcp.inboundAdapter(Tcp.nioClient(hostConnection.getIpAddress(), Integer.parseInt(hostConnection.getPort()))
                        .serializer(customSerializer)
                        .deserializer(customSerializer)
                        .id(hostConnection.getConnectionNumber()))
                .clientMode(true)
                .retryInterval(1000)
                .errorChannel("testChannel")
                .id(hostConnection.getConnectionNumber() + "adapter"))
        .enrichHeaders(f -> f.header("CustomerCode", hostConnection.getConnectionNumber()))
        .channel(directChannel())
        .handle(Jms.outboundAdapter(ConnectionFactory())
                .destination(hostConnection.getConnectionNumber()))
        .get();
theFlow = this.flowContext.registration(flow).id(hostConnection.getConnectionNumber() + "outflow").register();
I have created multiple flows by iterating over the list of connections, running the above code in a for loop and registering each flow in the flow context with a unique ID.
My clients are created successfully with no issue and then establish their connections as supported by the topology.
Issue:
I have counted the number of client connections created successfully: 7 client connections (7 integration flows) are made successfully and they initiate the connection by themselves.
When I create the 8th client connection (the 8th flow is created and registered successfully), .clientMode(true) does not work: the client doesn't initiate the connection itself after the first failure. It tries to connect once; if the connection succeeds there is no issue, but in case of failure it doesn't retry again.
Also, the other clients, i.e. the 7 client connections that were created successfully, also stop initiating the connection themselves once they get disconnected.
Note: there is no issue with the flows, only with the TCP adapters; they stop initiating the connection.
The flows themselves are created and registered successfully, because when I run a control bus command #adapter_id.retryConnection() the adapter gets connected to the server.
I don't understand what the issue with my flows is that prevents connections from being initiated after a particular count (i.e. seven), or whether there is a limitation on the number of clients that can be created.
One possibility is the taskScheduler's thread pool is exhausted - that shouldn't happen with the above configuration, but it depends on what else is in the application. Take a thread dump (e.g. jstack) to see what the taskScheduler threads are doing.
See the documentation for information about how to configure the threads in the scheduler. However, if it solves it, you should really figure out what task(s) are using scheduler threads for long tasks.
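For example, a minimal sketch of enlarging the scheduler pool, assuming the default Spring Integration taskScheduler bean is the one being exhausted; the pool size of 20 is an illustrative value:
// Minimal sketch: overriding Spring Integration's default "taskScheduler" bean.
// The default pool size is 10; client-mode adapters borrow threads from this pool.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
public class SchedulerConfig {

    @Bean
    public ThreadPoolTaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(20); // illustrative: size for the client-mode adapters plus other scheduled work
        scheduler.setThreadNamePrefix("task-scheduler-");
        return scheduler;
    }
}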
Also turn on DEBUG logging to see if it provides any clues.
I've been tasked with evaluating activemq-artemis for JMS clients. I have RabbitMQ experience, but none with activemq-artemis/JMS.
I installed Artemis on my local machine, created a new broker per the instructions, and set it up as a Windows service. The Windows service starts and stops just fine. I've made no changes to the broker.xml file.
For my first test I'm trying to perform a JMS Queue produce/consume from a stand alone java program. I'm using the code from the Artemis User Manual in the Using JMS section, (without using JNDI):
TransportConfiguration transportConfiguration = new TransportConfiguration(NettyConnectorFactory.class.getName());
ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF,transportConfiguration);
Queue orderQueue = ActiveMQJMSClient.createQueue("OrderQueue");
Connection connection = cf.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(orderQueue);
MessageConsumer consumer = session.createConsumer(orderQueue);
connection.start();
TextMessage message = session.createTextMessage("This is an order");
producer.send(message);
TextMessage receivedMessage = (TextMessage)consumer.receive();
System.out.println("Got order: " + receivedMessage.getText());
When I run this code, I get the following error:
WARN: AMQ212054: Destination address=jms.queue.OrderQueue is blocked. If the system is configured to block make sure you consume messages on this configuration.
My research hasn't been conclusive on whether this is a server-side setting or something about having the producer send without blocking. I haven't been able to find a producer send method that takes a blocking boolean, only persistence. Any ideas on where to focus? Thanks.
Edit: a new address-setting element added to broker.xml dedicated to this queue:
<address-setting match="jms.queue.OrderQueue">
<max-size-bytes>104857600</max-size-bytes>
<page-size-bytes>10485760</page-size-bytes>
<address-full-policy>PAGE</address-full-policy>
</address-setting>
I found this on further research in the user manual:
max-disk-usage: The max percentage of data we should use from disks. The system will block while the disk is full. Default=100.
and in the log after service startup with no messages published yet:
WARN [org.apache.activemq.artemis.core.server] AMQ222210: Storage usage is beyond max-disk-usage. System will start blocking producers.
So I think that no matter what my address settings were, it would start to block. Looking at the max-disk-usage setting in broker.xml, it was set to 90. The documentation default says 100; I set it to that, got no startup log warnings, and my test pub/sub code now works.
This warning appears when the address-full-policy is set to BLOCK and the memory limit is reached. Check the address-full-policy set in broker.xml; if it is set to BLOCK, change it to PAGE, or consume the pending messages from OrderQueue.
By default the max-disk-usage value is set to 90 (%); if the remaining free disk space is less than 10%, this warning will be shown and no messages will be accepted until you adjust the parameter or free up space.
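For reference, a hedged sketch of the relevant broker.xml setting; a value of 100 effectively disables the disk-usage block described above:
<!-- Sketch only: this element goes inside the <core> element of broker.xml -->
<max-disk-usage>100</max-disk-usage>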
Suppose that after 30s (the default client-failure-check-period) the client did not receive any packets from the server as a result of network connection problems.
Will the client now be disconnected from the session/connection?
Suppose now I add this configuration:
<retry-interval>1000</retry-interval>
<retry-interval-multiplier>1.5</retry-interval-multiplier>
<max-retry-interval>60000</max-retry-interval>
<reconnect-attempts>1000</reconnect-attempts>
What will happen now?
Will the client still get disconnected from the session/connection, but only after trying to reconnect 1000 times (or until the network is available again)? Or will it skip the disconnect entirely?
Regarding your first question, according to the HornetQ documentation, which can be found under 17.2. Detecting failure from the client side:
As long as the client is receiving data from the server it will consider the connection to be still alive.
If the client does not receive any packets for client-failure-check-period milliseconds then it will consider the connection failed and will either initiate failover, or call any FailureListener instances (or ExceptionListener instances if you are using JMS) depending on how it has been configured.
Therefore the client will assume that the connection was in fact lost and start its failure processes.
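As a hedged illustration of the JMS side of this, here is a minimal sketch of registering an ExceptionListener that is called once the failure is detected; the recovery logic is just a placeholder:
// Minimal sketch: an ExceptionListener is invoked when the client declares the connection dead.
import javax.jms.Connection;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;

public class FailureListenerExample {

    public static void register(Connection connection) throws JMSException {
        connection.setExceptionListener(new ExceptionListener() {
            @Override
            public void onException(JMSException e) {
                // Reached after client-failure-check-period expires and any configured
                // reconnect attempts are exhausted; start application-level recovery here.
                System.err.println("Connection failure detected: " + e.getMessage());
            }
        });
    }
}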
For your second question, also according to the HornetQ documentation, which can be found under 34.3. Configuring reconnection/reattachment attributes:
reconnect-attempts. This optional parameter determines the total number of reconnect attempts to make before giving up and shutting down. A value of -1 signifies an unlimited number of attempts. The default value is 0.
So, yes, the connection will be dropped, but only after 1000 failed reconnect attempts.
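For completeness, a minimal sketch of supplying the same reconnection attributes programmatically, assuming the ActiveMQ Artemis JMS client; the host, port and values simply mirror the XML above:
// Minimal sketch, assuming the ActiveMQ Artemis JMS client and a broker at localhost:61616.
import javax.jms.Connection;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ReconnectExample {

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
                "tcp://localhost:61616"
                + "?retryInterval=1000"
                + "&retryIntervalMultiplier=1.5"
                + "&maxRetryInterval=60000"
                + "&reconnectAttempts=1000");
        Connection connection = cf.createConnection();
        connection.start();
        // If the connection drops, the client retries with the backoff above before
        // reporting failure to any registered ExceptionListener.
        connection.close();
    }
}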
I'm looking to implement a somewhat intelligent MQ comms module, which should be tolerant of outages in the network connection. Basically it should try to reconnect every 5 seconds if the connection is lost.
The problem is the following. I use this code for reading:
queueMessage = new MQMessage();
queueMessage.Format = MQC.MQFMT_STRING;
queueGetMessageOptions = new MQGetMessageOptions();
queueGetMessageOptions.Options = MQC.MQGMO_SYNCPOINT + MQC.MQGMO_WAIT + MQC.MQGMO_FAIL_IF_QUIESCING;
queueGetMessageOptions.WaitInterval = 50;
producerQueue.Get(queueMessage, queueGetMessageOptions);
msg = queueMessage.ReadBytes(queueMessage.MessageLength);
(Of course I successfully connect to the queue manager beforehand, etc.)
I got the following issue: when this routine runs but there is no connection at the time of the .Get, the code simply hangs and stays in the .Get.
I use a timer to check for a timeout (in theory even that shouldn't be necessary, right?), and at the timeout I try to reconnect. But when this timeout expires, I still see the queue manager reporting that it's connected, while it clearly is not (no physical connection is present anymore). This issue has popped up since I started using SYNCPOINT, and I experience the same when I cut the connection during writing, or in this case when I try to force a Disconnect on the queue manager. So please help: what settings should I use to avoid getting stuck in Get and Put, and instead have an MQException thrown or something else controllable?
Thanks!
UPDATE: I used the following code to connect to the QueueManager.
Hashtable props = new Hashtable();
props.Add(MQC.HOST_NAME_PROPERTY, Host);
props.Add(MQC.PORT_PROPERTY, Port);
props.Add(MQC.CHANNEL_PROPERTY, ChannelInfo);
if(User!="") props.Add(MQC.USER_ID_PROPERTY, User);
if(Password!="") props.Add(MQC.PASSWORD_PROPERTY, Password);
props.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);
queueManager = new MQQueueManager(QueueManagerName, props);
producerQueue = queueManager.AccessQueue(
ProducerQueueName,
MQC.MQOO_INPUT_AS_Q_DEF // open queue for input
+ MQC.MQOO_FAIL_IF_QUIESCING); // but not if MQM stopping
consumerQueue = queueManager.AccessQueue(
ConsumerQueueName,
MQC.MQOO_OUTPUT + MQC.MQOO_BROWSE + MQC.MQOO_INPUT_AS_Q_DEF // open queue for output
+ MQC.MQOO_FAIL_IF_QUIESCING); // but not if MQM stopping
Needless to say, normally the code works well: read/write and connect/disconnect work as they should; I only have to figure out the current issue.
Thanks!
What version of MQ are you using? For automatic reconnection to work, the queue manager needs to be at least at MQ v7.0.1 and the MQ .NET client needs to be at the MQ v7.1 level.
Assuming you are using the MQ v7.1 .NET client, you need to specify the reconnect option during connection creation. You will need to enable reconnection by adding something like:
props.Add(MQC.CONNECT_OPTIONS_PROPERTY, MQC.MQCNO_RECONNECT);
Reconnection can also be enabled/disabled from the mqclient.ini file.
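As a rough sketch of that approach, the stanza below goes in the client's mqclient.ini; DefRecon=YES requests automatic reconnection for client connections:
CHANNELS:
   DefRecon=YES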
But what is surprising is why the Get/Put calls are hanging when there is no network connection. I hope you are not connecting to a queue manager running on the same machine as your application. There is no need to set any timer or anything like that: you can issue MQ calls, and if there is anything wrong with the connection, an exception will be thrown.
Update:
I think you are referring to IsConnected property of MQQueueManager class. The documentation says the value of this property: "If true, a connection to the queue manager has been made, and is not known to be broken. Any calls to IsConnected do not actively attempt to reach the queue manager, so it is possible that physical connectivity can break, but IsConnected can still return true. The IsConnected state is only updated when activity, for example, putting a message, getting a message, is performed on the queue manager.
If false, a connection to the queue manager has not been made, or has been broken, or has been disconnected."
As you can see, a true value does not mean the connection is still up. My suggestion would be to call a method such as Put/Get and handle any exception thrown.
Put/Get/Disconnect calls hanging appears to be a problem; my suggestion would be to raise a PMR with IBM.