How to wait for websockets to close gracefully on application shutdown? - quarkus

We have the following issue:
Preconditions:
The Quarkus application receives WebSocket connections
The Quarkus application is running in a pod (Kubernetes)
The WebSocket connections go through two layers of load balancers (ELB, nginx), but we made sure to increase the idle connection timeout (300s)
The WebSocket server must wait for the connections to close before shutting down
The client for the server doesn't have a reconnect mechanism
We have a shutdown hook that waits for the WebSocket sessions to close
Scenario when the pod is about to close:
The application receives a SIGTERM
Sometimes some connections (but not all) close with reason GOING_AWAY (which indicates that an endpoint is "going away", such as a server going down or a browser having navigated away from a page)
The shutdown hook:
void onStop(@Observes final ShutdownEvent ev) {
    long numActiveSessions = getActiveSessions();
    // Poll until all WebSocket sessions have closed before letting shutdown proceed.
    while (numActiveSessions > 0) {
        try {
            TimeUnit.SECONDS.sleep(waitForFinished);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            return;
        }
        numActiveSessions = getActiveSessions();
    }
}
Is there a better way to do this (one possible approach is sketched after the update below)? Why did we receive the GOING_AWAY reason? Is Quarkus closing the WebSockets when it receives a SIGTERM?
Update:
example project: https://github.com/pedrolopix/quarkus-websockets-example
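One possible improvement, sketched here purely as an assumption (the SESSIONS set, LOCK object, and 300s deadline are not Quarkus API, just standard javax.websocket callbacks plus plain Java), is to replace the sleep-poll loop with a wait that is woken whenever a session closes and is bounded by a deadline:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import javax.enterprise.event.Observes;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import io.quarkus.runtime.ShutdownEvent;

// Track open sessions ourselves so the shutdown hook can wait on them.
private static final Set<Session> SESSIONS = ConcurrentHashMap.newKeySet();
private static final Object LOCK = new Object();

@OnOpen
public void onOpen(Session session) {
    SESSIONS.add(session);
}

@OnClose
public void onClose(Session session) {
    SESSIONS.remove(session);
    synchronized (LOCK) {
        LOCK.notifyAll(); // wake the shutdown hook as soon as a session closes
    }
}

void onStop(@Observes ShutdownEvent ev) throws InterruptedException {
    long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(300); // don't block shutdown forever
    synchronized (LOCK) {
        while (!SESSIONS.isEmpty() && System.nanoTime() < deadline) {
            LOCK.wait(TimeUnit.SECONDS.toMillis(1)); // recheck on notify or once per second
        }
    }
}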

Related

Thread Kill Issue in Eclipse Jetty WebSocket client after error

QueuedThreadPool: WebSocketClient@122503328{STOPPING,8<=8<=200,i=5,q=7} Couldn't stop Thread[WebSocketClient@122503328-1556,5,main]
QueuedThreadPool: WebSocketClient@122503328{STOPPING,8<=8<=200,i=4,q=7} Couldn't stop Thread[WebSocketClient@122503328-1557,5,main]
QueuedThreadPool: WebSocketClient@122503328{STOPPING,8<=8<=200,i=4,q=7} Couldn't stop Thread[WebSocketClient@122503328-1560,5,main]
QueuedThreadPool: WebSocketClient@122503328{STOPPING,8<=8<=200,i=4,q=7} Couldn't stop Thread[WebSocketClient@122503328-1561,5,main]
QueuedThreadPool: WebSocketClient@122503328{STOPPING,8<=8<=200,i=4,q=7} Couldn't stop Thread[WebSocketClient@122503328-1563,5,main]
The above warning is logged repeatedly, and system performance is impacted, when we try to stop the client using the WebSocketClient stop() method. The stop timeout is set to 0.
This occurs when the server application on the third-party machine is down and the connection is refused because no server is listening on the destination port. A ConnectException is seen in the onError callback.
This warning is seen even if disconnect and close are done from a different thread than the client thread.
That's because you are using a Thread from the ThreadPool to attempt to stop that same thread pool.
Don't call .stop() from a Jetty thread.
Schedule the stop on a new thread of your own making.
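A minimal sketch of that advice (assuming client is the WebSocketClient instance being stopped):

import org.eclipse.jetty.websocket.client.WebSocketClient;

// Run stop() on a dedicated thread so that no Jetty pool thread
// ends up trying to stop its own pool.
static void stopClientSafely(WebSocketClient client) {
    new Thread(() -> {
        try {
            client.stop(); // LifeCycle.stop(); declared to throw Exception
        } catch (Exception e) {
            e.printStackTrace();
        }
    }, "websocket-client-stopper").start();
}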

Apache Artemis doesn't stop scanning for expires

I'm using Apache Artemis ActiveMQ 2.6.3 as an MQTT broker embedded in a Spring 5 application:
@Bean(initMethod = "start", destroyMethod = "stop")
fun embeddedActiveMQ(securityManager: ActiveMQJAASSecurityManager) =
    EmbeddedActiveMQ().apply {
        setConfiguration(getEmbeddedActiveMQConfiguration())
        setConfigResourcePath("activemq-broker.xml")
        setSecurityManager(securityManager)
    }
private fun getEmbeddedActiveMQConfiguration() =
    ConfigurationImpl().apply {
        addAcceptorConfiguration("netty", DefaultConnectionProperties.DEFAULT_BROKER_URL)
        addAcceptorConfiguration("mqtt", "tcp://$host:$mqttPort?protocols=MQTT")
        name = brokerName
        bindingsDirectory = "$dataDir${File.separator}bindings"
        journalDirectory = "$dataDir${File.separator}journal"
        pagingDirectory = "$dataDir${File.separator}paging"
        largeMessagesDirectory = "$dataDir${File.separator}largemessages"
        isPersistenceEnabled = persistence
        connectionTTLOverride = 60000
    }
Although I'm setting the connection TTL to 60 seconds in the Kotlin code above, as suggested in the documentation, and the client disconnected and terminated an hour ago, the log shows the following entries:
2020-06-22 10:57:03,890 [Thread-29 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5@ade4717)] DEBUG o.a.a.a.core.server.impl.QueueImpl - Scanning for expires on client1.some-topic
2020-06-22 10:58:03,889 [Thread-35 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5@ade4717)] DEBUG o.a.a.a.core.server.impl.QueueImpl - Scanning for expires on client1.some-topic
Based on these log entries, I'm afraid that "dead" connection resources are never cleaned up by the server.
What should I do to actually remove the "dead" connections from the server to avoid leaking resources?
The broker will often create resources like addresses, queues, etc. to deal with clients. In the case of MQTT clients the broker will create queues which essentially represent the client's subscriptions.
In this particular case a queue named client1.some-topic has been created for an MQTT subscription and the broker is scanning that queue for expired messages. At this point it looks like the broker is working as designed.
When a client disconnects without unsubscribing, what the broker does with the subscription depends on whether the client used a clean session or not.
If the client used a clean session, the broker will delete the subscription queue when the client disconnects (even in the event of a failure).
Otherwise the broker is obliged to hold on to the subscription queue and route messages to it. If the client never reconnects to unsubscribe, the subscription may fill up with lots of messages, trigger the broker's paging mode, and eventually even limit message production altogether. In this case the client can either reconnect and unsubscribe, or the subscription queue can be removed administratively, as sketched below.
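One way to do the administrative removal is through the embedded broker's server API. This is a Java sketch under stated assumptions: embedded is taken to be the EmbeddedActiveMQ bean from the question, and the queue name is the one from the log above.

import org.apache.activemq.artemis.api.core.SimpleString;
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

// Drop a leftover MQTT subscription queue by name.
static void dropSubscriptionQueue(EmbeddedActiveMQ embedded) throws Exception {
    embedded.getActiveMQServer()
            .destroyQueue(SimpleString.toSimpleString("client1.some-topic"));
}

The same removal can also be performed through the broker's management interfaces instead of application code.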

IBM XMS.Net listener hangs when the connection is closed unexpectedly

Hi, I use the WebSphere MQ client 8.0.0.8. I set up my listener once and start listening to MQ, but something went wrong: my connection was closed, yet no error was thrown, so my listener hung and could no longer receive messages. If an error is thrown I have a mechanism to catch it and restart, but in this case that failed. Is there any property I can set to avoid this issue?
I have an ExceptionListener and reconnect options in my connection properties, but they don't solve this directly. I have an AutoResetEvent (receiveCompleteEvent) mechanism; the only solution I found is to set that signal in the ExceptionListener and kill the connection. In the exception listener I can log the connection error notifications, but no automatic reconnection happens.
connectionfactory.SetIntProperty(IBM.XMS.XMSC.WMQ_CLIENT_RECONNECT_OPTIONS, IBM.XMS.XMSC.WMQ_CLIENT_RECONNECT);
connectionfactory.SetIntProperty(IBM.XMS.XMSC.WMQ_CLIENT_RECONNECT_TIMEOUT, 150);

private void OnException(Exception ex)
{
    QueueStatuslog.Error(String.Format("Unexpected error occurred on connection: {0}", ex.ToString()));
    try
    {
        if (receiveCompleteEvent != null)
        {
            QueueStatuslog.Error(String.Format("Due to connection error, sending stop signal: {0}", ex.ToString()));
            receiveCompleteEvent.Set(); // wake the thread blocked on the AutoResetEvent
        }
    }
    catch (Exception inner)
    {
        QueueStatuslog.Error(inner.ToString());
    }
}
Exceptions, such as connection-related ones, are thrown to the application when it makes a synchronous MQ API call, like consumer.Receive or producer.Send. If you are using a message listener to receive messages, delivery is an asynchronous operation and messages arrive on the message listener thread, so XMS cannot throw exceptions on that thread. Hence it requires another mechanism, the ExceptionListener, to let the application know about any connection-related issues.
You will need to set up an ExceptionListener on the connection and catch any exception thrown. When an exception is thrown, issue Connection.Stop, clean up, and reinitialize message receiving.
You can also look at using automatic client reconnection and this.
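The stop/clean-up/reinitialize flow described above, written as a hypothetical Java JMS sketch (XMS mirrors the JMS API, so the .NET calls are analogous; the reinitializeReceiver() hook is assumed application code, not an XMS/JMS API):

import javax.jms.Connection;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;

// On a connection-level exception: stop delivery, clean up, rebuild.
public class ReconnectingExceptionListener implements ExceptionListener {
    private final Connection connection;

    public ReconnectingExceptionListener(Connection connection) {
        this.connection = connection;
    }

    @Override
    public void onException(JMSException ex) {
        System.err.println("Connection error: " + ex);
        try {
            connection.stop(); // halt asynchronous message delivery
        } catch (JMSException stopEx) {
            // the connection is already broken; continue with cleanup anyway
        }
        reinitializeReceiver(); // assumed hook: tear down and rebuild consumers/sessions
    }

    private void reinitializeReceiver() {
        // application-specific: recreate the connection, session, and consumer
    }
}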

Unable to reconnect SignalR JS client after recycle of application pool

When I recycle the application pool for the site where the SignalR hub is running, the JavaScript clients are unable to reconnect. But everything is OK if the client does a refresh in the browser.
In the client's console log, these lines repeat multiple times every second after a reset of the app pool (I have replaced the connection token with abcd):
LOGG: [15:51:19 UTC+0200] SignalR: Raising the reconnect event
LOGG: [15:51:19 UTC+0200] SignalR: An error occurred using longPolling. Status = parsererror. undefined
LOGG: [15:51:19 UTC+0200] SignalR: SignalR: Initializing long polling connection with server.
LOGG: [15:51:19 UTC+0200] SignalR: Attempting to connect to 'http://lab/signalr/reconnect?transport=longPolling&connectionToken=abcd' using longPolling.
LOGG: [15:51:19 UTC+0200] SignalR: Raising the reconnect event
I have tried disabling all authentication on the hub, but still the same result.
Both the server and the client are running SignalR v1.0.1.
The hub connection on the client is set up like this:
var connection = $.hubConnection('http://lab:8097', { logging: true });
var proxy = connection.createHubProxy('task');
connection.start({ jsonp: true }).done(function () {
    proxy.invoke('OpenTask', id);
});
I'm also using cross-domain on the server-side hub registration:
RouteTable.Routes.MapHubs(new HubConfiguration { EnableCrossDomain = true });
The server is running on IIS 7.5, and the client is IE9.
Anyone have an idea what's wrong?
This issue will be resolved in 1.1 RTW (not released yet; currently only the beta is out).
For your reference, here's the fix: https://github.com/SignalR/SignalR/issues/1809. If you'd like to have the fix earlier, you can implement the changes noted in the issue.
Lastly, if you do choose to implement the fix, you will need to handle the .disconnected event on the connection and restart the connection entirely.

handle over exception in JMS operations

I am using ActiveMQ 5.4 for the JMS implementation in my project.
I want to notify the user every time the application fails to send a message,
but I am not getting any handle at the occurrence of an exception.
For instance, in case the JMS broker is down and I perform createConnection, createSession, etc., or a message send operation within a try block,
control never comes back to the catch block or the ExceptionListener set for the connection.
My code looks like:
connection = (TopicConnection) new ActiveMQConnectionFactory(localBrokerURL).createConnection();
connection.setExceptionListener(new ExceptionListener() {
    @Override
    public void onException(JMSException arg0) {
        System.out.println("Exception Occurred");
    }
});
connection.start();
final TopicSession session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
final Topic sendTopic = session.createTopic("someTopicName");
where the value of localBrokerURL is:
failover://(ssl://localhost:61618?trace=true&wireFormat.maxInactivityDuration=0)
Please help. Any hint is highly appreciated.
That is the intent of the failover: transport. It will hide transport failures and automatically try to reconnect, replaying any JMS state when a new connection is established.
There is a TransportListener that you can use to get transport suspended and resumed events; see the sketch below.
If you remove the failover: component from the broker URL, all exceptions will be propagated up to your client.
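A minimal sketch of the TransportListener approach (assuming connection is the ActiveMQ connection created from the factory above):

import java.io.IOException;
import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.transport.TransportListener;

// Get notified when the failover transport suspends and resumes.
((ActiveMQConnection) connection).addTransportListener(new TransportListener() {
    @Override
    public void onCommand(Object command) {
        // inbound commands; usually nothing to do here
    }

    @Override
    public void onException(IOException error) {
        System.out.println("Transport error: " + error);
    }

    @Override
    public void transportInterupted() { // (sic) the method name is spelled this way in ActiveMQ
        System.out.println("Connection to the broker was lost");
    }

    @Override
    public void transportResumed() {
        System.out.println("Connection to the broker was restored");
    }
});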
Just set the timeout property on the connection URI and it will throw an error instead of blocking the thread:
failover://(ssl://localhost:61618?trace=true&wireFormat.maxInactivityDuration=0)?timeout=10000
See http://activemq.apache.org/failover-transport-reference.html
