ActiveMQ dynamic discovery does not work in my prototype, which has this objective:
a JMS client app (message producer) which load-balances request messages to multiple (2) JMS consumer
apps. There are 3 Amazon EC2 t2.micro instances running for this prototype;
each instance runs ActiveMQ 5.14.4. The load balancing is
achieved through a network of brokers, which is created by static discovery
network connectors configured in the activemq.xml file on the ActiveMQ client instance as:
<networkConnector
    name="frontEnd->WMR1"
    uri="static:(tcp://<publicip>:61616)"
    duplex="true"
    decreaseNetworkConsumerPriority="true"
    networkTTL="2"
    dynamicOnly="true">
  <excludedDestinations>
    <topic physicalName=">" />
  </excludedDestinations>
</networkConnector>
<networkConnector
    name="frontEnd->WMR2"
    uri="static:(tcp://<publicip>:61616)"
    duplex="true"
    decreaseNetworkConsumerPriority="true"
    networkTTL="2"
    dynamicOnly="true">
  <excludedDestinations>
    <topic physicalName=">" />
  </excludedDestinations>
</networkConnector>
The prototype with static discovery works perfectly and load-balances any number of JMS client messages
to the 2 JMS consumer applications.
However, I need to enhance the prototype to use dynamic (multicast) discovery to build the network of brokers. So I tried:
<networkConnectors>
  <networkConnector uri="multicast://default"/>
</networkConnectors>
<transportConnectors>
  <transportConnector uri="tcp://localhost:0" discoveryUri="multicast://default"/>
</transportConnectors>
as directed in the documentation, but dynamic discovery does not work. The transport and network connectors
are created OK (I can see them in
the ActiveMQ admin console) but they are empty: no message brokers were discovered via dynamic discovery.
I researched this problem exhaustively, and at one point I found a post suggesting that
the problem might be in the content of /etc/hosts, which is:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost6 localhost6.localdomain6
I need some help getting ActiveMQ dynamic discovery to work on Amazon EC2 t2.micro instances.
Q. Does Amazon VPC support multicast or broadcast?
No.
https://aws.amazon.com/vpc/faqs/
The same is true for EC2 Classic. The networks are not Ethernet, they're a software-defined emulation of Ethernet, allowing for far better scalability and security than would be practical with native Ethernet.
You can build an overlay but there's little point in that for discovery purposes, since an overlay would require static configuration.
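Since multicast is unavailable in a VPC, the usual workaround is to fall back on static discovery with all peer brokers listed in one connector. A minimal sketch, where broker-a and broker-b are placeholder hostnames for your EC2 instances (substitute your own addresses and keep whatever attributes your existing connectors need):

```xml
<networkConnectors>
  <!-- a single static: URI may list several peers; the bridge retries them as needed -->
  <networkConnector
      name="staticBridge"
      uri="static:(tcp://broker-a:61616,tcp://broker-b:61616)"
      duplex="true"
      decreaseNetworkConsumerPriority="true"
      networkTTL="2"/>
</networkConnectors>
```

This gives the same network of brokers the multicast configuration was meant to produce, at the cost of editing the list when instances change.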
Related
We are developing an application using Spring Boot and Apache Camel that reads a message from ActiveMQ Artemis, does some transformation, and sends it to ActiveMQ Artemis. Our application is deployed as a war file on an on-premise JBoss EAP 7.2.0. Both the source and target applications are remote to our application and are also deployed on JBoss EAP 7.2.0. The remote queues to which Camel connects are ActiveMQ Artemis queues created in JBoss, accessed using the http-remoting protocol. This setup was working when there was only one node of each application.
Now we are making the source and target applications 3 nodes each (i.e. they will be deployed in multiple JBoss servers). For accessing the front-end of the source and target applications we are configuring and accessing them through a load balancer.
Can we configure the load balancer to access the source and target brokers from the Camel layer? There will be 3 source and 3 target brokers. Or is clustering the brokers the only option in this case?
We are thinking of load balancing between the queues and not clustering. Suppose we have three queues q1, q2, and q3 with corresponding brokers b1, b2, and b3. I will configure the load balancer url in the Camel layer like http-remoting://<load-balancer-url>:<port> (much like we do while load balancing HTTP API requests). Any message coming in will hit the load balancer, and the load balancer will decide which queue to route the message to.
JMS connections are stateful. When a client creates a connection there is no indication of the queues to which it will send messages. The load-balancer will have to direct that client's connection to either b1, b2, or b3 and it will have no way to determine where it should go. A load-balancer working with messaging will almost certainly only be able to balance connections, not messages. It sounds like you want load-balancing at the message level instead. Perhaps you should look into something like Qpid Dispatch Router.
Messaging doesn't use HTTP so using an HTTP load balancer like you do with your HTTP API(s) won't work. It's easy for a load-balancer to inspect HTTP headers and route requests, especially since HTTP is stateless. However, messaging connections are stateful and the protocols are typically quite a bit more complex than HTTP. I don't know of any load-balancers that will work the way you are wanting for messaging.
You need your client not to use the topology; you can do this by calling setUseTopologyForLoadBalancing on your ActiveMQConnectionFactory. If you get the connection factory from EAP, I think this is configurable on the connection factory since EAP 7.3.
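If your Artemis client version supports it, the same switch can be set as a URI parameter on the connection factory URL instead of in code; a hedged sketch, with broker-host as a placeholder:

```
tcp://broker-host:61616?useTopologyForLoadBalancing=false
```

With the topology ignored, the client connects only to the address it was given (e.g. the load balancer), rather than to broker addresses learned from the cluster.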
I'm looking for the easiest way to build a Wildfly cluster with JMS load balancing for a development platform. Messages will be produced by the Wildfly servers themselves.
I wonder how the ActiveMQ Artemis JMS server embedded in WildFly works in a cluster deployment. I see on this site that a WildFly node can declare its JMS server as master or slave.
I also read here that an MDB can use an "in-vm-connector" connector.
I'm not sure that I understand how a JMS cluster works with a master and a slave JMS server with "in-vm-connector". Will the MDB instances in the WildFly node with the slave JMS server receive messages? Will the JMS cluster provide load balancing, or will there be only one active JMS server at a time?
In ActiveMQ Artemis (i.e. the JMS broker embedded into WildFly) clustering (which provides things like message load balancing) and high-availability (which provides redundancy for the integrity of the message data) are separate concepts. The master/slave configuration you mentioned is for high-availability. This configuration doesn't provide message load balancing since only one of the brokers is alive at any given point in time.
If you want to configure a master/slave pair, it's recommended that you separate those servers from the servers that actually process the messages, since it doesn't make sense to have MDBs running on a server which doesn't have a live broker (i.e. a slave): they won't receive any messages.
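For reference, in a standalone Artemis broker.xml the high-availability role is declared with an ha-policy element; a minimal shared-store sketch (WildFly exposes the equivalent settings through its messaging-activemq subsystem, so the attribute names there differ):

```xml
<ha-policy>
  <shared-store>
    <!-- the backup broker declares <slave/> instead and stays passive until failover -->
    <master/>
  </shared-store>
</ha-policy>
```

Only the master serves messages; the slave takes over when the master fails, which is why this pairing provides redundancy but not message load balancing.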
We are using the Spring JmsTemplate implementation with a CachingConnectionFactory. We have configured the connection with a failover-url:
failover:(ssl://172.16.0.11:61616,ssl://172.16.0.12:61616)?maxReconnectDelay=2000
On the transport connector in ActiveMQ we have enabled the option "rebalanceClusterClients":
<transportConnector name="openwire"
    uri="ssl://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
    rebalanceClusterClients="true">
  <publishedAddressPolicy>
    <publishedAddressPolicy publishedHostStrategy="IPADDRESS" />
  </publishedAddressPolicy>
</transportConnector>
However, all of the clients are connecting to the first broker in the list of brokers instead of some of them being rebalanced to the second broker.
Previously we did not use the Spring JMS implementation, but instead we used the ActiveMQ libraries directly. This implementation did allow rebalancing the connected clients.
Is something in Spring preventing the rebalancing? Perhaps the CachingConnectionFactory?
EDIT 2019-07-10
I've found these two (p1 and p2) posts on SO where it is stated that CachingConnectionFactory doesn't play nicely with the failover protocol. However, I think this has been resolved since then, as we do see connections moving between brokers if a broker is turned off.
What we do not see is connections being balanced across brokers. We did see this behavior when we were still using our own custom JMS implementation. So might it be something in Spring or the JmsTemplate?
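For completeness, a hedged sketch of the broker-side flags that govern client rebalancing; as I understand the ActiveMQ failover transport documentation, updateClusterClients must also be enabled for rebalanceClusterClients to take effect:

```xml
<transportConnector name="openwire"
    uri="ssl://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
    updateClusterClients="true"
    rebalanceClusterClients="true"
    updateClusterClientsOnRemove="true">
  <publishedAddressPolicy>
    <publishedAddressPolicy publishedHostStrategy="IPADDRESS" />
  </publishedAddressPolicy>
</transportConnector>
```

updateClusterClients pushes the current broker list to connected clients, and updateClusterClientsOnRemove does the same when a broker leaves the cluster.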
The actual problem was not ActiveMQ or Spring; rather, an external firewall prevented this from working.
Our JMS Listener application connects to an ActiveMQ network of brokers through a load balancer, which we are told distributes connections amongst brokers in a round-robin fashion. Our spring boot application is creating a connection via the load balancer, which in turn feeds the connection to one of the brokers amongst the network of brokers. If a message is published to the brokers then it would be a lot quicker if the message was on the broker that the JMS listener connection lived on. However, the likelihood of that occurring is slim unless we can distribute the connections across the brokers.
I've tried increasing the concurrency in the DefaultJmsListenerContainerFactory, but that didn't do the trick. I was thinking about extending AbstractJmsListenerContainerFactory to somehow create a Map of DefaultMessageListenerContainer instances, but it looks like createListenerContainer will only return an instance of whatever is parameterized in the AbstractJmsListenerContainerFactory, and we cannot parameterize it with a Map.
We are using Spring Boot 1.5.14.RELEASE.
== UPDATE ==
I've been playing around with the classes above, and it seems inherent in Spring JMS that a JMS listener is associated with a single message listener container, which in turn is associated with a single (potentially shared) connection.
For any folks who have JMS application listeners connecting to a load-balanced network of brokers: are you creating a single connection to a single broker, and if so, do you experience significant performance degradation as a result of the network of brokers having to move inbound messages to a broker with consumers?
I use DOSGi to connect two OSGi components (iPOJO components) over a local network.
I configured it with either SOAP or RESTful JAX-RS. However, both use TCP for communication (I saw this in Wireshark).
Now I would like to configure SOAP or RESTful JAX-RS to use UDP. How can I do that?
Thank you for your help.
Assuming this is the Apache CXF DOSGi implementation: given that CXF can use UDP as a transport, it looks simple enough to use a udp URL as your "org.apache.cxf.ws.address" when creating your distributed service.
Thank you very much for your response.
I implemented an application including a server component and a client component
as indicated by
Using Distributed Services with iPOJO.
However, it uses TCP for client-server communication.
I tried to declare the server with the "org.apache.cxf.ws.address" property set to a UDP address, "udp://localhost:9090/service".
Example:
<property name="service.exported.interfaces" value="*" />
<property name="service.exported.configs" value="org.apache.cxf.ws" />
<property name="org.apache.cxf.ws.address" value="udp://localhost:9090/service" />
However, I received an error:
Unknown protocol: udp
I'm using the bundle cxf-dosgi-ri-singlebundle-distribution-1.1.jar for client-server communication.
Could you please give me some advice?