ActiveMQ Artemis: Can slave instance be a connection point for the clients? - jms

The ActiveMQ Artemis documentation says:
Slave will be in passive mode until the master crashes...
That's OK, but it's not clear if brokers in passive mode can be a connection point. In other words, can I put my slave in the connection list for a remote client like below?
(tcp://my-master:61616,tcp://my-slave:61616)?reconnectAttempts=5
If yes, does it mean that a broker in passive mode is just a router?

The ActiveMQ Artemis JMS client supports a list of servers to be used for the initial connection attempt. It can be specified in the connection URI using a syntax with (), e.g.: (tcp://myhost:61616,tcp://myhost2:61616)?reconnectAttempts=5. The client uses this list of servers to create the first connection.
The slave broker doesn't accept incoming client connections until it becomes live, but it is important to include both the master and the slave URIs in the list because the client can't know which broker is live when it creates the initial connection.
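A minimal sketch of such a client connection using the Artemis JMS client (the host names and everything beyond creating the connection are illustrative assumptions, not taken from the question):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class FailoverConnectExample {
    public static void main(String[] args) throws Exception {
        // Both URIs are listed; only the live (master) broker will accept the connection.
        ConnectionFactory cf = new ActiveMQConnectionFactory(
                "(tcp://my-master:61616,tcp://my-slave:61616)?reconnectAttempts=5");
        try (Connection connection = cf.createConnection()) {
            connection.start();
            // ... create a session, producers and consumers as usual ...
        }
    }
}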

Related

Spring websockets + Amazon MQ limitations

We want to use Spring websockets + STOMP + Amazon MQ as a full-featured message broker. We were trying to do benchmarking to find out how many client websocket connections a single Tomcat node can handle. But it appears that we hit the Amazon MQ connection limit first. As per the AWS documentation, Amazon MQ has a limit of 1000 connections per node (as far as I understand we can ask support to increase the limit, but I doubt it can be increased dramatically). So my questions are:
1) Am I correct in assuming that for every websocket connection from a client to the Spring/Tomcat server, a corresponding connection is opened from the server to the broker? Is this the intended behavior, or are we doing something wrong here/missing something?
2) What can be done here? I don't think it is a good idea to create a broker node for every 1000 users.
According to https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/messaging/simp/stomp/StompBrokerRelayMessageHandler.html you are doing everything right, and it is the documented behavior.
Quote from javadoc:
For each new CONNECT message, an independent TCP connection to the broker is opened and used exclusively for all messages from the client that originated the CONNECT message. Messages from the same client are identified through the session id message header. Reversely, when the STOMP broker sends messages back on the TCP connection, those messages are enriched with the session id of the client and sent back downstream through the MessageChannel provided to the constructor.
As for a fix, you can write your own message broker relay with TCP connection pooling.
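For context, the relay described in the javadoc is typically enabled with configuration along these lines (a sketch; the broker host, port, endpoint and destination prefixes are assumptions, not values from the question):

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Each client CONNECT results in a dedicated TCP connection from this relay to the broker.
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("my-amazon-mq-host")   // assumption: broker endpoint
                .setRelayPort(61613);                // assumption: STOMP port
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws");
    }
}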

How does an MQ client, like the Java client, listen for messages from an MQ server running a ServerConn channel?

I am looking for a detailed description of how an IBM MQ client or listener gets messages from the MQ server when new messages are placed on an MQ queue or topic.
How is the connection between the MQ client and MQ server created?
Does the MQ client initiate the connection with the server, or does the server initiate the connection to its consumer?
In case we have a connection pool defined on the MQ client, how does the client know that it has to create more connections to the server as messages increase on the server? How does the client know about the messages on the server?
Is there communication from the server to the client which tells the client that new messages have arrived?
I am looking for these details, not details on how this is set up or how to set up MQ channels or listeners. I am looking for how it works behind the scenes.
If someone can point me to the right direction or documentation, it would be great.
It's hard to speak definitively about how the IBM WebSphereMQ client & server work since they're closed source, but based on my experience with other messaging implementations I can provide a general explanation.
A JMS connection is initiated by a client to a server. A JMS client uses a javax.jms.ConnectionFactory to create a javax.jms.Connection which is the connection between the client and server.
Typically when a client uses a pool, the pool is either filled "eagerly" (a certain number of connections are created when the pool is initialized, to fill it to a certain level) or "lazily" (the pool is filled with connections one-by-one as clients request them from the pool). If a client requests a connection from the pool while all the connections in the pool are being used and the maximum number of connections allowed by the pool hasn't been reached, then another connection will be created. If the pool has reached its maximum allowed size (i.e. no more connections can be created), then the client requesting a connection will have to wait for another client to return its connection to the pool, at which point the pool will give it to the waiting client.
A JMS client can find out about messages on the server in a few different ways.
If the JMS client wants to ask the server occasionally about the messages it has on a particular queue, it can create a javax.jms.MessageConsumer and use the receive() method. This method can wait forever for a message to arrive on the queue, or it can take a timeout parameter so that if a message doesn't arrive within the specified timeout the call to receive() returns null.
If the JMS client wants to receive a message from a particular queue as soon as the message arrives, it can create a javax.jms.MessageListener implementation and register it on a consumer for that queue. Once such a listener is registered, the server sends each message to the listener as it arrives on the queue. This is sometimes referred to as a "callback" since the server is "calling back" to the client.
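A small sketch of that callback approach using the standard JMS API (the connection factory, queue name and message handling are placeholders):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class ListenerExample {
    public static void listen(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("MY.QUEUE");            // placeholder queue name
        MessageConsumer consumer = session.createConsumer(queue);
        // The provider "calls back" into this listener whenever a message arrives.
        consumer.setMessageListener(message -> System.out.println("Got message: " + message));
        connection.start();                                       // delivery begins after start()
    }
}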
The first thing you should do is take a course on JMS/IBM MQ or go to the new IBM conference called: Integration Technical Conference
Ok, now to your questions:
How is the connection between the MQ client and MQ server created?
You simply call the createQueueConnection method of the QueueConnectionFactory class and specify the credentials.
QueueConnection conn = cf.createQueueConnection("myUserId", "myPwd");
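For completeness, the cf in that line has to come from somewhere, typically a JNDI lookup or built directly; a sketch using the IBM MQ classes for JMS (the host, port, channel and queue manager names are assumptions):

import javax.jms.QueueConnection;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MQConnectExample {
    public static QueueConnection connect() throws Exception {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setHostName("mqhost.example.com");             // assumption: MQ server host
        cf.setPort(1414);                                 // assumption: listener port
        cf.setChannel("DEV.APP.SVRCONN");                 // assumption: server-connection channel
        cf.setQueueManager("QM1");                        // assumption: queue manager name
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);  // client (network) mode, not bindings
        return cf.createQueueConnection("myUserId", "myPwd");
    }
}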
Does the MQ client initiate the connection with the server, or does the server initiate the connection to its consumer?
The MQ client application starts the connection - always.
In case we have a connection pool defined on the MQ client, how does the client know that it has to create more connections to the server as messages increase on the server? How does the client know about the messages on the server?
It is up to the team's architect or lead developer to understand message flow and message patterns. Hence, they will know what to set the pool count at. Also, lots and lots of testing too. Some client applications will only require a pool count of 10 whereas other applications may need a pool count of 50 because it is a heavy flow.
Is there communication from the server to the client which tells the client that new messages have arrived?
You use the createReceiver method of the QueueSession class to create a receiver and retrieve messages from it. Pass a timeout value to the receiver's receive method rather than continuously polling the queue manager.
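A sketch of that pattern with the standard JMS QueueSession API (the queue name and timeout value are placeholders):

import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueReceiver;
import javax.jms.QueueSession;
import javax.jms.Session;

public class ReceiveExample {
    public static void receiveOne(QueueConnection conn) throws Exception {
        QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("MY.QUEUE");   // placeholder queue name
        QueueReceiver receiver = session.createReceiver(queue);
        conn.start();
        // Block for up to 5 seconds; returns null if nothing arrives in that window.
        Message message = receiver.receive(5000);
        if (message != null) {
            System.out.println("Received: " + message);
        }
    }
}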
Again, some training on the use of JMS/IBM MQ is strongly recommended.

Kafka server configuration - listeners vs. advertised.listeners

To get Kafka running, you need to set some properties in the config/server.properties file. There are two settings I don't understand.
Can somebody explain the difference between listeners and advertised.listeners property?
The documentation says:
listeners: The address the socket server listens on.
and
advertised.listeners:
Hostname and port the broker will advertise to producers and consumers.
When do I have to use which setting?
listeners is what the broker will use to create server sockets.
advertised.listeners is what clients will use to connect to the brokers.
The two settings can be different if you have a "complex" network setup (with things like public and private subnets and routing in between).
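As a concrete illustration, here is a sketch of a Java consumer (the broker address, group id and topic are placeholders): the address in bootstrap.servers only needs to reach some broker for the initial metadata request; the addresses the client uses afterwards are the ones from advertised.listeners.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Initial contact point only; the metadata response returns the advertised listeners.
        props.put("bootstrap.servers", "broker.example.com:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}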
Since I cannot comment yet I will post this as an "answer", adding on to M.Situations answer.
Within the same document he links, there is this blurb about which listener is used by a Kafka client (https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic):
As stated previously, clients never see listener names and will make metadata requests exactly as before. The difference is that the list of endpoints they get back is restricted to the listener name of the endpoint where they made the request.
This is important: whichever URL you use in your bootstrap.servers config determines the URL* the client will get back, provided it is mapped in advertised.listeners (I do not know what the behavior is if the listener does not exist).
Also note this:
The exception is ZooKeeper-based consumers. These consumers retrieve the broker registration information directly from ZooKeeper and will choose the first listener with PLAINTEXT as the security protocol (the only security protocol they support).
As an example broker config (for all brokers in cluster):
advertised.listeners=EXTERNAL://XXXXX.compute-1.amazonaws.com:9990,INTERNAL://ip-XXXXX.ec2.internal:9993
inter.broker.listener.name=INTERNAL
listener.security.protocol.map=EXTERNAL:SSL,INTERNAL:PLAINTEXT
If the client uses XXXXX.compute-1.amazonaws.com:9990 to connect, the metadata fetch will go to that broker. However, the URL returned for the Group Coordinator or Leader could be 123.compute-1.amazonaws.com:9990* (a different machine!). This means that the match is done on the listener name, as described in KIP-103, irrespective of the actual URL (node).
Since the protocol map for EXTERNAL is SSL this would force you to use an SSL keystore to connect.
If, on the other hand, you are within AWS, say, you can then use ip-XXXXX.ec2.internal:9993, and the corresponding connection would be plaintext as per the protocol map.
This is especially needed in IaaS where in my case brokers and consumers live on AWS, whereas my producer lives on a client site, thus needing different security protocols and listeners.
EDIT:
Also adding Inbound Rules is much easier now that you have different ports for different clients (brokers, producers, consumers).
EDIT2:
This article is a great in depth guide if the above is still not clear: https://rmoff.net/2018/08/02/kafka-listeners-explained/
There's a lot of confusion, and little information, in the answers provided here. So I'm posting my more detailed answer for clarity.
listeners - The addresses the Kafka broker's socket server binds to in order to accept connections. The hostname in this setting can be left empty if you want Kafka to bind to the default interface (in that case the hostname it publishes is derived via the InetAddress.getLocalHost().getCanonicalHostName() Java API).
advertised.listeners - This address is published to ZooKeeper by every Kafka broker. If this setting is not set, the value of listeners will be used and published to ZooKeeper instead. Its only purpose is to tell others how to reach the broker: Kafka clients use the advertised.listeners addresses published to ZooKeeper (under /brokers/ids/<id>, in the endpoints field) to talk to the Kafka broker.
Now the question is: why have two settings? Why not a single setting? Let's say your Kafka broker is sitting behind a proxy, and all the Kafka clients have to talk to the proxy to reach the broker. In this case, we want the broker to bind to a local host and port, but we can't publish that address to ZooKeeper because clients can't use it. So the Kafka admin can set advertised.listeners to the proxy host and port.
Also, on some of our production hosts, InetAddress.getLocalHost().getCanonicalHostName() returned empty, so the listeners setting's hostname was empty, which was fine for binding. But advertised.listeners was then published to ZooKeeper as NULL:9092, since it took the same value as listeners by default. All the brokers tried to publish themselves to ZooKeeper in this way, so the brokers got the error java.lang.IllegalArgumentException: requirement failed: Configured end points null:14092 as advertised.listeners as NULL:9092 is already registered by broker 101. The fix was to change the advertised.listeners setting to have the hostname in it.
Listeners are all the addresses the Kafka broker listens on (it can be more than 1 address) whereas advertised listeners are the addresses other agents (producers, consumers, or brokers) need to connect to if they want to talk to the current broker.
The 2 lists should be the same if all are running on the same machine (can connect using localhost:9092 or 127.0.0.1:9092) but if consumers, producers, or other brokers do not stay on the same machine or Docker instance, they must use different addresses (that's why we have advertised listeners). Two examples:
Say we use Docker to run 2 Kafka instances named kafka and kafka2. kafka2 certainly cannot connect to kafka using localhost:29092; it must use kafka:9092 instead. So for kafka, listener = localhost:29092, advertised listener = kafka:9092
A producer on the host machine cannot connect to kafka using kafka:9092. It must use localhost:29092 instead.
Let's use the following docker-compose config to understand more about the startup process of a Kafka broker:
# config/docker-compose.yml
kafka:
  image: docker.io/bitnami/kafka:3
  ports:
    - "29092:29092"
    - "9092:9092"
  environment:
    - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
    - ALLOW_PLAINTEXT_LISTENER=yes
    - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
    - KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:29092
    - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://localhost:29092
    - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
  depends_on:
    - zookeeper
With this config, Docker will start 1 Kafka broker instance which listens on 2 ports:
9092 with name CLIENT
29092 with name EXTERNAL
The broker then connects to ZooKeeper at zookeeper:2181 and registers its 2 addresses: kafka:9092 and localhost:29092. Also, with KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT, it tells other brokers (via ZooKeeper) to connect to kafka:9092 if they want to talk to it.
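As a sketch of the client side of this example (the listener values come from the compose file above; the topic name and serializers are assumptions), a producer on the host machine would bootstrap against the EXTERNAL listener, while a client inside the Docker network would use kafka:9092 instead:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // From the host machine: use the EXTERNAL advertised listener.
        // From another container on the same Docker network it would be "kafka:9092" instead.
        props.put("bootstrap.servers", "localhost:29092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "hello"));
        }
    }
}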
But why do we need 2 ports? Read more here
References:
My notes while learning Kafka
Kafka Listeners – Explained
From this link: https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic
During the 0.9.0.0 release cycle, support for multiple listeners per broker was introduced. Each listener is associated with a security protocol, ip/host and port. When combined with the advertised listeners mechanism, there is a fair amount of flexibility with one limitation: at most one listener per security protocol in each of the two configs (listeners and advertised.listeners).
In some environments, one may want to differentiate between external clients, internal clients and replication traffic independently of the security protocol for cost, performance and security reasons. A few examples that illustrate this:
Replication traffic is assigned to a separate network interface so that it does not interfere with client traffic.
External traffic goes through a proxy/load-balancer (security, flexibility) while internal traffic hits the brokers directly (performance, cost).
Different security settings for external versus internal traffic even though the security protocol is the same (e.g. different set of enabled SASL mechanisms, authentication servers, different keystores, etc.)
As such, we propose that Kafka brokers should be able to define multiple listeners for the same security protocol for binding (i.e. listeners) and sharing (i.e. advertised.listeners) so that internal, external and replication traffic can be separated if required.
So,
listeners - Comma-separated list of URIs we will listen on and their protocols.
Specify hostname as 0.0.0.0 to bind to all interfaces.
Leave hostname empty to bind to default interface.
Examples of legal listener lists:
PLAINTEXT://myhost:9092,TRACE://:9091
PLAINTEXT://0.0.0.0:9092, TRACE://localhost:9093
advertised.listeners - Listeners to publish to ZooKeeper for clients to use, if different than the listeners above.
In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used.

Load balancing in ActiveMQ network of brokers

We are using ActiveMQ and have defined a network of brokers (2 in our test setup). We have configured the brokers to accept AMQP connections and we have enable "updateClusterClients" and "rebalanceClusterClients" like so:
<transportConnector name="amqp" uri="amqp+ssl://0.0.0.0:5673?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600&amp;transport.transformer=jms" updateClusterClients="true" rebalanceClusterClients="true"/>
Furthermore, we have built our clients using Qpid JMS. The clients have been configured using a failover URL.
The clients can communicate with each other just fine. Also, when I stop one of the two brokers, they switch to the remaining broker.
However, when I restarted the broker, I expected to see some of the clients move to the newly restarted broker. Unfortunately, what I actually see is that they all stay connected to the same broker.
What might be the reason that they don't rebalance themselves?
Also, I would like the clients to spread over the two brokers when they connect initially. Is there a way I can achieve this?
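For reference, the failover-style client configuration described above looks roughly like this with Qpid JMS (broker host names, ports and options are placeholders, not the actual setup):

import javax.jms.Connection;
import org.apache.qpid.jms.JmsConnectionFactory;

public class QpidFailoverExample {
    public static Connection connect() throws Exception {
        // Both brokers are listed; the client fails over between them.
        JmsConnectionFactory factory = new JmsConnectionFactory(
                "failover:(amqps://broker1:5673,amqps://broker2:5673)");
        return factory.createConnection();
    }
}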
The ActiveMQ broker does not attempt to rebalance the AMQP clients in the broker network. There could be a way to accomplish it, but it would make some assumptions about the nature of every AMQP client that's connected, such as the fact that they all support connection redirects, which not all of them do.
Until a better mechanism is defined such that a client can connect and advertise that it is willing to be redirected, I don't think you'll see ActiveMQ forcibly dropping connections on AMQP clients to rebalance them.
The broker is able to rebalance OpenWire clients because it knows that an OpenWire client connected with the failover transport will deal with the Connection Control command that requests the client to leave, whereas a client that is not connected with failover will ignore that command.
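By contrast, an OpenWire client that the broker can rebalance is one connected with the failover transport, roughly like this (a sketch using the ActiveMQ 5.x JMS client; hosts and ports are placeholders):

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class OpenWireFailoverExample {
    public static Connection connect() throws Exception {
        // The failover: transport is what allows the broker to redirect/rebalance this client.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616)");
        return factory.createConnection();
    }
}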

JMS with clustered nodes

I have two clustered managed servers running on WebLogic, and separate JMS server1 and server2 running on each managed server. The problem is that in the application properties file we hardcoded and pass only the JMS server1 JNDI name to the application. So both applications running on the two nodes actually use only one fixed JMS server, which is not truly distributed and clustered. If JMS server1 is down, the whole application will be down.
My question is: how do I let the application dynamically find a JMS server in the above scenario? Can you please point me in a direction? Thanks!
It's in the Weblogic docs at: http://docs.oracle.com/cd/E14571_01/web.1111/e13738/best_practice.htm#CACDDFJD
Basically you create a comma-separated list of servers, and the JMS connection logic should be able to automatically handle the case when one of the servers is down:
e.g.
t3://hostA:7001,hostB:7001
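A sketch of the client-side lookup with such a URL (standard WebLogic JNDI usage; the connection factory JNDI name is a placeholder):

import java.util.Hashtable;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ClusterLookupExample {
    public static QueueConnection connect() throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Both cluster nodes are listed; JNDI will use whichever is reachable.
        env.put(Context.PROVIDER_URL, "t3://hostA:7001,hostB:7001");
        InitialContext ctx = new InitialContext(env);

        QueueConnectionFactory cf =
                (QueueConnectionFactory) ctx.lookup("jms/MyConnectionFactory"); // placeholder JNDI name
        return cf.createQueueConnection();
    }
}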
When you use a property like jms.jndi.provider.url=t3://hostA:31122,hostA:31124
it tells WLS to connect to either hostA:31122 or hostA:31124.
Note your JMS client is connected to only one host at any given time.
When you shut down hostA, the connection between the JMS client and server is cut abruptly, resulting in an exception. Your code will have to handle this exception gracefully and attempt to connect to WLS again periodically to ensure it connects to hostB.
WLS internally will round-robin the requests if more than 1 instance of the JMS client is running.
When using an MDB as the JMS client, deploying it to a cluster and using such a URL, one MDB instance would connect to one host and the other instance would connect to the other host. An MDB also inherently has the ability to reconnect periodically to the JMS destination.
An easy solution to your problem could be to:
1) Set jms.jndi.provider.url=t3://hostA:31122,hostA:31124
2) Have 2 instances of the JMS client code running, so one will connect to port 31122 and the other to 31124
3) Set Forward-Delay on the JMS queue so that messages don't remain unconsumed in a queue for long and get forwarded to the other queue, which has an active consumer.
I am updating my progress here instead of adding more comments. I have tested using a standalone JMS client by changing the properties file from t3://hostA:7001 to t3://hostA:7001,hostB:7001 for the JMS provider. The failover is automatically handled by WLS. No code change. The exception I got above was caused by using wlclient.jar; it works after I changed to wlfullclient.jar.
I followed this link to generate wlfullclient.jar.
Thanks everyone!

Resources