Triggering action(s) upon ClientConnectionFactory connection establishment/acceptance - Spring

I am currently using a TcpNioClientConnectionFactory to establish a TCP connection to a desired host. I need to perform some actions once the connection has been established/accepted by that host. The actions involve sending a message to that host, so I need to know the connection has been accepted at the socket level before doing so.
I currently have a Spring @EventListener configured to catch all TcpConnectionOpenEvent events. However, it does not seem that this event is published at the time the outbound TCP connection is accepted by the host. This was to be expected given the name of the event, but I did find it interesting that the connectionId of the published event contained a host value of 'unknown'.
I am wondering if there is any way for me to catch the acceptance of the TCP connection by the destination host, similar to the TcpConnectionOpenEvent, and trigger actions as I need. Ideally I would be able to capture this acceptance in such a way that I have a complete connectionId for the newly established connection.
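For reference, a minimal sketch of the listener setup described above (class and method names are illustrative, not from my actual code):

```java
import org.springframework.context.event.EventListener;
import org.springframework.integration.ip.tcp.connection.TcpConnectionOpenEvent;
import org.springframework.stereotype.Component;

@Component
public class TcpOpenEventHandler {

    // Expected to fire once the outbound connection is established, so the
    // initial message can safely be sent to the host at this point.
    @EventListener
    public void handleOpen(TcpConnectionOpenEvent event) {
        // The connectionId should identify the remote host once the
        // connection is up; in my case the host arrives as 'unknown'.
        String connectionId = event.getConnectionId();
        // ... trigger the initial outbound message for this connection here
    }
}
```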

It's a bug since 5.2.x (a regression introduced when we added support for connection timeouts). The event is now published before the connection is established.
The fix will be in tomorrow's releases; issue here.
Thanks for reporting.

Related

Can you check to see if an IBM MQ topic is up and available through a Java application before attempting to create a connection?

I would like to add some conditional logic to our Java application code for attempting to create a JMS Topic Connection. I have seen problems in the past stemming from attempting to create a connection when the MQ server had been restarted or was currently down. One improvement I added was to check for the quiescent state, and another was to increase the timer before attempting reconnection to our durable topic queue.
Is there a way to confirm with the MQ server/topic/channel that it is up and running and a connection request can safely be made?
The best way to confirm that a queue manager (and the channel you are using to connect to the queue manager) is up and running is to attempt to connect to it.
If your connection attempt fails, you will get an MQ reason code telling you exactly why. This is a much better way to confirm than any administrative command, because it also confirms that your application and its security context are correct and able to connect to the queue manager. It is entirely possible to have an up-and-running queue manager but an application that is not yet correctly configured to use it. So connect from the application, and if it works, the queue manager is up and running.
Your point about increasing the timer before attempting to reconnect after a failure is well made. It doesn't help anyone if you hammer the queue manager with lots of repeated, closely spaced connection attempts before it is ready to accept them. Still, anything that is going to test the availability of the queue manager ultimately needs to connect to it, so, very simply, just connect.
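As a sketch, a "just connect" availability check might look like this in Java, assuming the IBM MQ classes for JMS are on the classpath (host, port, channel, and queue manager names are placeholders):

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MqAvailabilityCheck {

    public static boolean canConnect() {
        MQConnectionFactory factory = new MQConnectionFactory();
        try {
            factory.setHostName("mq.example.com");   // placeholder host
            factory.setPort(1414);                   // placeholder listener port
            factory.setQueueManager("QM1");          // placeholder queue manager
            factory.setChannel("DEV.APP.SVRCONN");   // placeholder channel
            factory.setTransportType(WMQConstants.WMQ_CM_CLIENT);

            // The real test: connect exactly as the application would.
            Connection connection = factory.createConnection();
            connection.close();
            return true;
        } catch (JMSException e) {
            // The linked exception carries the underlying MQ reason code,
            // which tells you exactly why the connection was refused.
            System.err.println("Connect failed: " + e.getMessage()
                    + " (linked: " + e.getLinkedException() + ")");
            return false;
        }
    }
}
```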

Spring websockets + Amazon MQ limitations

We want to use Spring websockets + STOMP + Amazon MQ as a full-featured message broker. We were doing benchmarking to find out how many client websocket connections a single Tomcat node can handle, but it appears that we hit the Amazon MQ connection limit first. As per the AWS documentation, Amazon MQ has a limit of 1000 connections per node (as far as I understand we can ask support to increase the limit, but I doubt it can be increased dramatically). So my questions are:
1) Am I correct in assuming that for every websocket connection from a client to the Spring/Tomcat server, a corresponding connection is opened from the server to the broker? Is this correct behavior, or are we doing something wrong here/missing something?
2) What can be done here? I don't think it is a good idea to create a broker node for every 1000 users.
According to https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/messaging/simp/stomp/StompBrokerRelayMessageHandler.html you are doing everything right, and this is documented behavior.
Quote from javadoc:
For each new CONNECT message, an independent TCP connection to the broker is opened and used exclusively for all messages from the client that originated the CONNECT message. Messages from the same client are identified through the session id message header. Reversely, when the STOMP broker sends messages back on the TCP connection, those messages are enriched with the session id of the client and sent back downstream through the MessageChannel provided to the constructor.
As for a fix, you could write your own message broker relay with TCP connection pooling.
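For reference, the documented one-TCP-connection-per-CONNECT behavior applies to the standard relay configuration along these lines (the endpoint path, destinations, and broker host are illustrative):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws"); // endpoint path is illustrative
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Each client CONNECT through /ws gets its own TCP connection from
        // this relay to the broker, hence the per-node connection limit.
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("broker.example.com") // placeholder Amazon MQ endpoint
                .setRelayPort(61613);               // STOMP port
        registry.setApplicationDestinationPrefixes("/app");
    }
}
```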

Determining if a remote subscriber is temporarily disconnected

This is in the context of reconnection. As a moderator, I want to know when a remote subscriber (a client subscribed to the stream that I publish) temporarily drops its connection.
First, the subscriber events disconnect, reconnecting, and reconnected are dispatched locally on the remote side, that is, on the client that loses its connection. The publisher receives no events about the remote connection being lost.
Second, I tried to use the 'signal' API in both the JS web SDK and the server SDK. The idea was to get an error when sending a signal to a 'temporarily' disconnected client, but all I got was successful responses from both; from the server I get a 204 status code. So I understand the signals are queued and sent when the remote client reconnects.
So far I have found no way to determine, from a connected client, when another client temporarily loses its connection and enters the 'reconnecting' state.
The signal API only returns an error when the client is completely disconnected from the session, not temporarily disconnected.
I know I can mimic this with signals and timeouts, but that way the server has to trust my client, and I really need to validate this on the server side as well; otherwise it would be 'hackable'.
Thanks.
TokBox Developer Evangelist here.
When any client loses connection, a connectionDestroyed event is dispatched to all clients, including the client that lost the connection. There is also a connection event object, which includes a reason and a connection property, dispatched with the event.
The reason for the connectionDestroyed event varies depending on whether the client called the disconnect method on the Session object, was force-disconnected, or disconnected due to a network condition. In the case of a client temporarily losing connection, the connectionDestroyed event will fire with the reason property of the event data set to networkDisconnected.
As for the signal queue, by default, any signals you send while the client is temporarily disconnected from a session are queued and sent when (and if) it successfully reconnects. You can set the retryAfterReconnect property to false in the options you pass into the Session.signal() method to prevent signals from being queued while the client is disconnected.
For more information on Automatic Reconnection, please visit: https://tokbox.com/developer/guides/connect-session/js/#automatic_reconnection

Interpretation of SetWriteDeadline error

I am writing a websocket server in Go that broadcasts messages to clients. I use SetWriteDeadline on each send so that the broadcast loop doesn't get stuck.
My question is: how do I interpret an error from SetWriteDeadline? In particular, should I assume that there is something wrong with that particular client and unregister it? Or is it a server-side issue that happened to get triggered on this client?
After researching SetWriteDeadline, I found that the deadline is for putting the message on the TCP stack server-side, not for the client to receive the message. So perhaps a better way to phrase my question is this: is there a separate TCP stack for each websocket client (perhaps this is the thing that has size WriteBufferSize), or is this buffer shared between clients? In the former case it seems like I should unregister the client on a SetWriteDeadline error, but not in the latter case.
Websocket connections are independent of other websocket connections.
Websocket connections have an underlying network connection. These network connections are also independent of each other.
An error returned from SetWriteDeadline indicates a problem with that specific websocket connection or the websocket connection's underlying network connection.
Also note that Gorilla's SetWriteDeadline method never returns an error.

Connecting timeout for WebSockets

I can't find any information specifying the timeout for establishing a connection with a WebSocket server. That is, how long can the state be "CONNECTING" before it changes to "CLOSED"? When the target host doesn't exist, the state changes almost immediately; this is easy to find out. But what happens if the DNS lookup takes longer or the server is busy?
The same question arises for the "CLOSING" state, when the connection goes through the closing-handshake procedure. If the connection fails here, how long does it take until onClose() is called?
Are these two timeouts browser specific? If so, does anyone know any concrete numbers?
