Change AMQP inactivity time-out

As I understand it, there is an inactivity time-out in the AMQP protocol; it's set to 15 minutes in Azure Service Bus.
Is it possible to change that time-out? OperationTimeout is ignored in the case of the AMQP protocol.

The inactivity timeout in the AMQP protocol is called the idle-timeout of a connection. Most, if not all, client libraries support this property. Azure Service Bus sets this value to 4 minutes. This cannot be changed, but a client can set its own idle-timeout to make the service send heartbeats during idle time. If allowed by the library, the application may also override the idle timer interval to send heartbeats more often.
The 15-minute timeout you mentioned appears to be the entity idle timeout, which is Service Bus-specific behavior. If an entity (queue or topic) has no activity for a pre-defined time window, the entity is unloaded (meaning all protocol connections are closed). This value cannot be changed. The only way to keep the entity active is to send messages over the sending link, or to keep an outstanding credit on the receiving link.
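If it helps to see that last point in code, here is a minimal sketch of keeping an outstanding credit on the receiving link by holding a consumer open, assuming the Apache Qpid JMS client is used against Service Bus; the namespace, credentials, and queue name are placeholders:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.qpid.jms.JmsConnectionFactory;

public class KeepEntityActive {
    public static void main(String[] args) throws Exception {
        // Placeholder namespace and credentials: replace with your own.
        ConnectionFactory factory = new JmsConnectionFactory(
                "amqps://<your-namespace>.servicebus.windows.net");
        Connection connection = factory.createConnection("<sas-key-name>", "<sas-key>");
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("myqueue");

        // An open consumer with a registered listener keeps link credit
        // outstanding on the receiving link, which (per the answer above)
        // keeps the entity from being unloaded for inactivity.
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(message -> {
            // process the message here
        });

        // Keep the process (and therefore the link) alive.
        Thread.sleep(Long.MAX_VALUE);
    }
}
```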

Related

QPID JMS Heartbeat / Keepalive

Is it possible to set a heartbeat or keep-alive for a JMS consumer using Qpid JMS? I've found some Qpid configuration that can be set on the URL, like an idleTimeout, but I've not found an option to send empty frames within a given time period.
Regards
The Qpid JMS client allows you to configure the idle timeout, which controls when the client will consider the remote to have failed should there be no traffic coming from the remote, either in the form of messages or possibly as empty frames sent to keep the connection from idling out. The client will itself respond to the remote peer's requested idle timeout interval by sending an empty frame as needed to ensure that the remote doesn't drop the connection due to inactivity.
If you are seeing connection drops due to idle timeout on a server, then it is likely you have not configured the server to provide an idle timeout value in the Open performative that it sends to the client.
Reading the specification section on Idle Timeout of a Connection can shed some light on how this works.
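As an illustration of the client-side configuration mentioned above, here is a minimal sketch that sets the idle timeout via the connection URI; the amqp.idleTimeout option name is the Qpid JMS URI option as I recall it (verify against your client version's configuration documentation), and the broker address and credentials are placeholders:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;

import org.apache.qpid.jms.JmsConnectionFactory;

public class IdleTimeoutConfig {
    public static void main(String[] args) throws Exception {
        // amqp.idleTimeout is given in milliseconds; here the client treats
        // the remote as failed after 2 minutes without any incoming frames.
        ConnectionFactory factory = new JmsConnectionFactory(
                "amqp://broker.example.com:5672?amqp.idleTimeout=120000");
        Connection connection = factory.createConnection("user", "password");
        connection.start();
        // ... create sessions, producers, and consumers as usual ...
        connection.close();
    }
}
```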

Which timeout has TIBCO EMS while waiting for acknowledge?

We are developing a solution using TIBCO-EMS, and we have a question about its behaviour.
When using CLIENT_ACKNOWLEDGE mode to connect, the client acknowledges the received message. We'd like to know how long TIBCO waits for the acknowledgement, and whether this time is configurable by the system admin.
By default, the EMS server waits forever for the acknowledgement of the message.
As long as the session is still alive, the transaction will not be discarded and the server waits for an acknowledgement or rollback.
There is, however, a server setting, disconnect_non_acking_consumers, with which the client will be disconnected if there are more pending (unacknowledged) messages than the queue limit allows to be stored (maxbytes, maxmsgs). In this case, the server sends a connection reset to get rid of the client.
Sadly the documentation doesn't state this explicitly and the only public record I found was a knowledge base entry: https://support.tibco.com/s/article/Tibco-KnowledgeArticle-Article-33925
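For reference, a minimal CLIENT_ACKNOWLEDGE consumer might look like the sketch below. It assumes the standard TIBCO EMS JMS client classes (com.tibco.tibjms.TibjmsConnectionFactory); the server URL, credentials, and queue name are placeholders:

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import com.tibco.tibjms.TibjmsConnectionFactory;

public class ClientAckConsumer {
    public static void main(String[] args) throws Exception {
        // Server URL and credentials are placeholders.
        TibjmsConnectionFactory factory =
                new TibjmsConnectionFactory("tcp://ems-host:7222");
        Connection connection = factory.createConnection("user", "password");
        connection.start();

        // CLIENT_ACKNOWLEDGE: the message stays pending on the server until
        // the client calls acknowledge(); by default the server waits
        // indefinitely while the session is alive.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Queue queue = session.createQueue("my.queue");
        MessageConsumer consumer = session.createConsumer(queue);

        Message message = consumer.receive();
        try {
            // ... process the message ...
            message.acknowledge();  // only now is the message settled on the server
        } catch (Exception e) {
            session.recover();      // ask for redelivery of unacknowledged messages
        }

        connection.close();
    }
}
```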

Watson IoT QoS1/2 Retries

Does anyone know what the Watson IoT broker does if it publishes a QoS1 or 2 message and doesn't receive the appropriate acknowledgement from the client? Does it implement a timeout (say 20 seconds or so) and then resend the message? It seems that some brokers do this while others only resend the message on a new connection (if retain is set to 1, of course). The MQTT spec is a little vague on this point.
The message would be considered in-flight for the client that hasn't acknowledged it, and redelivery will only occur when that client disconnects and reconnects (and only if the client connected with clean session = 0).
For QoS1 and 2:
At least once (QoS1)
With quality of service level 1 (QoS1), the message is always delivered at least once. If a failure occurs before an acknowledgment is received by the sender, a message can be delivered multiple times. The message must be stored locally at the sender until the sender receives confirmation that the message was published by the receiver. The message is stored in case the message must be sent again.
Exactly once (QoS2)
The "exactly once" quality of service level 2 (QoS2) is the safest, but slowest mode of transfer. The message is always delivered exactly once and must also be stored locally at the sender, until the sender receives confirmation that the message was published by the receiver. The message is stored in case the message must be sent again. With quality of service level 2, a more sophisticated handshaking and acknowledgment sequence is used than for level 1 to ensure that messages are not duplicated
MQTT keep alive interval
The MQTT keep alive interval, which is measured in seconds, defines the maximum time that can pass without communication between the client and broker. The MQTT client must ensure that, in the absence of any other communication with the broker, a PINGREQ packet is sent. The keep alive interval allows both the client and the broker to detect that the network failed, resulting in a broken connection, without needing to wait for the TCP/IP timeout period to be reached.
If your Watson IoT Platform MQTT clients use shared subscriptions, the keep alive interval value can be set only to between 1 and 3600 seconds. If a value of 0 or a value that is greater than 3600 is requested, the Watson IoT Platform broker sets the keep alive interval to 3600 seconds.
Retained messages
Watson IoT Platform provides limited support for the retained messages feature of MQTT messaging. If the retained message flag is set to true in an MQTT message that is sent from a device, gateway, or application to Watson IoT Platform, the message is handled as an unretained message. Watson IoT Platform organizations are not authorized to publish retained messages. The Watson IoT Platform service overrides the retained message flag when it is set to true and processes the message as if the retained message flag is set to false.
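To tie the keep alive, clean session, and QoS points together, here is a minimal sketch using the Eclipse Paho Java client; the broker URI, client ID, credentials, and topic are illustrative placeholders rather than verified Watson IoT values:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class QosAndKeepAlive {
    public static void main(String[] args) throws Exception {
        // Broker URI, client ID, and credentials are placeholders.
        MqttClient client = new MqttClient("ssl://broker.example.com:8883",
                "d:myorg:mytype:mydevice");

        MqttConnectOptions options = new MqttConnectOptions();
        options.setUserName("user");
        options.setPassword("password".toCharArray());
        // Keep alive in seconds: a PINGREQ is sent if no other traffic flows.
        options.setKeepAliveInterval(60);
        // cleanSession = false keeps session state so that unacknowledged
        // QoS1/QoS2 messages can be redelivered after a reconnect.
        options.setCleanSession(false);
        client.connect(options);

        MqttMessage message = new MqttMessage("hello".getBytes());
        message.setQos(1);          // at-least-once delivery
        message.setRetained(false); // retained=true is treated as unretained anyway
        client.publish("iot-2/evt/status/fmt/json", message);

        client.disconnect();
    }
}
```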

Relation between DISCINT and Keep Alive interval in WebSphere MQ server?

We had issues with a lot of applications connecting to the MQ server without properly disconnecting. Hence we introduced DISCINT on our server connection channels with a value of 1800 seconds, which we found ideal for our transactions. But our keep alive interval is pretty high at 900 seconds. We would like to reduce that to less than 300, as suggested by the mqconfig utility. Before doing that, I would like to know whether this is going to affect our disconnect interval value, whether it will override it, and whether it will cause more frequent disconnects, which would be a performance hit for us.
How do these two values work, and how are they related?
Thanks
TCP KeepAlive works below the application layer in the protocol stack, so it does not affect the disconnection of the channel configured by DISCINT.
However, lowering the value can result in more frequent disconnects if your network is unreliable, for example if it has intermittent, very short periods (shorter than the current KeepAlive, but longer than the new one) when packets are not flowing.
I think the main difference is that DISCINT is for disconnecting a technically working channel that has not been used for a given period, while KeepAlive is for detecting a non-working TCP connection.
MQ also provides a means to detect non-working connections in the application layer, configured by the heartbeat interval (HBINT).
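To make the layering concrete, here is a minimal Java sketch: TCP keepalive is a plain socket option handled by the OS, independent of MQ's DISCINT/HBINT channel attributes (host and port are placeholders):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class TcpKeepAliveDemo {
    public static void main(String[] args) throws Exception {
        // SO_KEEPALIVE is a socket option, so the OS TCP stack handles the
        // probing, independently of what the application (here, MQ with its
        // DISCINT/HBINT settings) is doing at its own layer.
        Socket socket = new Socket();
        socket.setKeepAlive(true);  // enable TCP keepalive probes on this socket
        socket.connect(new InetSocketAddress("mq-host.example.com", 1414), 5000);

        System.out.println("SO_KEEPALIVE enabled: " + socket.getKeepAlive());
        // The probe interval itself (e.g. the 900 seconds mentioned above) is
        // an OS-level setting such as net.ipv4.tcp_keepalive_time on Linux,
        // not a per-socket Java option.
        socket.close();
    }
}
```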
These may help:
http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.5.0/com.ibm.mq.con.doc/q015650_.htm
http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.5.0/com.ibm.mq.ref.con.doc/q081900_.htm
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.5.0/com.ibm.mq.ref.con.doc/q081860_.htm

WebSphere MQ DISC vs KAINT on SVRCONN channels

We have a major problem with many of our applications making improper connections (SVRCONN) to the queue manager and not issuing MQDISC when the connection is no longer required. This causes a lot of idle, stale connections, prevents applications from making new connections, and fails with a CONNECTION BROKEN (2009) error. We had been restricting application connections with the clientidle parameter in our Windows MQ on version 7.0.1.8, but now that we have migrated to MQ v7.5.0.2 on Linux we are deciding on the best option available in the new version. We no longer have clientidle in the ini file for v7.5, but we do have DISCINT & KAINT on SVRCONN channels. I have been going through the advantages and disadvantages of both for our scenario of applications making connections through SVRCONN channels and leaving connections open without issuing a disconnect. Which of these channel attributes is ideal for us? Any suggestions? Does either of them take precedence over the other?
First off, KAINT controls TCP functions, not MQ functions. That means that for it to take effect, the TCP keepalive function must be enabled in the TCP stanza of qm.ini. There is nothing wrong with this, but the native HBINT and DISCINT are more responsive than delegating to TCP. This addresses the problem where the OS hasn't recognized that a socket's remote partner is gone and cleaned up the socket. As long as the socket exists and MQ's channel is idle, MQ won't notice. When TCP cleans the socket up, MQ's exception callback routine sees it immediately and closes the channel.
Of the remaining two, DISCINT controls the interval after which MQ will terminate an idle but active socket, whereas HBINT controls the interval after which MQ will shut down an MCA attached to an orphaned socket. Ideally, you will have a modern MQ client and server so that you can use both of these.
The DISCINT should be a value longer than the longest expected interval between messages if you want the channel to stay up during the Production shift. So if a channel should have message traffic at least once every 5 minutes by design, then a DISCINT longer than 5 minutes would be required to avoid channel restart time.
The HBINT actually flows a small heartbeat message over the channel, but it will only do so if HBINT seconds have passed without a message. This catches the case where the socket is dead but TCP hasn't yet cleaned it up. HBINT allows MQ to discover this before the OS does and take care of it, including tearing down the socket.
In general, really low values for HBINT can cause lots of unnecessary traffic. For example, HBINT(5) would flow a heartbeat in every five-second interval in which no other channel traffic is passed. Chances are, you don't need to terminate orphaned channels within 5 seconds of the loss of the socket, so a larger value is perhaps more useful. That said, HBINT(5) would cause zero extra traffic in a system with a sustained message rate of 1/second, until the app died, in which case the orphaned socket would be killed pretty quickly.
For more detail, please go to the SupportPacs page and look for Morag's "Keeping Channels Running" presentation.
