I want to set up a network of brokers because I need to serve a number of users at the same time. I discovered that I can either use embedded brokers and start every broker from Java code, or download the full Apache ActiveMQ distribution and run multiple instances.
For the moment I don't have any specific reason to use embedded brokers, but on the other hand I don't have any reasons against them either. Could you please give a hint as to what the real disadvantages of using embedded brokers may be?
Thanks, Cheers
You probably want stand-alone broker(s).
Embedded brokers (in an application context) are usually used inside an application server to provide fast response/low latency to application code running in that server. The embedded broker then stores and forwards the messages to another broker or to interested clients. Other use cases include using an embedded broker in unit tests, or within an embedded IoT-style computer.
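For reference, a minimal sketch of what "starting a broker in Java code" looks like with ActiveMQ's BrokerService (the broker name, connector URL and persistence setting below are illustrative assumptions, not a recommended configuration):

import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded");             // assumed name; clients in the same JVM can use vm://embedded
        broker.setPersistent(false);                  // in-memory only for this sketch
        broker.addConnector("tcp://localhost:61616"); // optional: let external clients connect
        broker.start();

        // ... run the application code that uses the broker ...

        broker.stop();
    }
}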
We are currently writing a library that consumes rabbitmq events with spring-amqp.
This library needs to be used from some applications that themselves use rabbitmq with spring-amqp.
Is it possible to isolate the separate RabbitMQ configurations from each other, so that the configuration from within the library doesn't interfere with the existing one in the applications?
Both would connect to the same RabbitMQ cluster.
I looked through the documentation of spring-amqp but only found a way to split the rabbit configuration for consuming and producing events.
Since spring-amqp 2.3 there is Multiple Broker (or Cluster) Support, which can be used to create multiple connections to the same broker. You can find a sample config at this link.
Also, you can take a look at the spring-multirabbit library (https://github.com/freenowtech/spring-multirabbit), which is actually the ancestor of that feature in spring-amqp and can be used to add support for multiple RabbitMQ connections to a service that already has a Spring-configured connection, in a non-intrusive way.
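If the multi-broker feature is not available to you, a plain spring-amqp setup with separately named beans is one way to keep the library's configuration from colliding with the host application's. A minimal sketch, assuming the library can use its own bean names and (hypothetically) its own virtual host:

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LibraryRabbitConfig {

    @Bean
    public CachingConnectionFactory libraryConnectionFactory() {
        CachingConnectionFactory cf = new CachingConnectionFactory("rabbit.example.com"); // placeholder host
        cf.setVirtualHost("library"); // a separate vhost keeps the library's traffic isolated (assumption)
        return cf;
    }

    @Bean
    public SimpleRabbitListenerContainerFactory libraryListenerContainerFactory(
            CachingConnectionFactory libraryConnectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(libraryConnectionFactory);
        return factory;
    }
}

A listener in the library then points at its own container factory, e.g. @RabbitListener(queues = "library.events", containerFactory = "libraryListenerContainerFactory"), so it never touches the application's default beans.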
I designed a component that collects application runtime data, which is sent to the analysis server via Kafka. In most cases, the apps already integrate Kafka. In order to avoid connecting to the same Kafka cluster twice, I need to determine whether the app uses Kafka. If it does, I reuse its connections directly.
So, how can I detect whether the app uses Kafka?
And if the app integrates spring-kafka, what should I do?
avoid connecting to the same Kafka cluster twice,
There should be no issue with this. Clients are lightweight enough to create more than one of them in one app, on one machine, etc.
determine whether the app uses Kafka.
Beyond decompilation, if it is a fat jar, you can run jar -tf app.jar | grep -i kafka; however, this would only tell you that there are files with the word "Kafka" in their package names, not necessarily that any Apache Kafka clients are in use.
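If the check has to happen at runtime inside the same JVM rather than by inspecting the jar, a classpath probe is another option. A minimal sketch; the helper class name is hypothetical, and the probed class names assume the standard kafka-clients and spring-kafka artifacts:

public final class KafkaPresenceCheck {

    // true if the plain Apache Kafka client library is on the classpath
    public static boolean hasKafkaClients() {
        return isPresent("org.apache.kafka.clients.producer.KafkaProducer");
    }

    // true if spring-kafka is on the classpath
    public static boolean hasSpringKafka() {
        return isPresent("org.springframework.kafka.core.KafkaTemplate");
    }

    private static boolean isPresent(String className) {
        try {
            Class.forName(className, false, KafkaPresenceCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}

Note that this only tells you the classes are present, not that the app actually connects to the same cluster whose connections you want to reuse.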
We are designing a solution that will consume messages from IBM MQ using JMS. The plan is to use WAS Liberty, so JMS is the technology of choice. We will create Message-Driven Beans that will listen for messages on MQ queues.
We are considering both WAS Liberty and OpenLiberty as well.
The trick here is that we must implement it with fail-over, so that if one of our servers fails, the other will keep consuming messages from MQ automatically, like in an active/passive mechanism.
I'm aware that the MQ adapter needs to be installed as it is not provided out-of-the-box.
I have the following questions:
Does the WAS Liberty messaging implementation support fail-over? Meaning that if the active message consumer node fails, the stand-by node will automatically take over and start consuming messages from MQ? What about OpenLiberty?
How can I configure the messaging system to work that way? Can you point me to the documentation?
Or is this feature only provided by WebSphere?
There is no such functionality in WebSphere Liberty or Open Liberty yet. You can create an RFE here: https://www.ibm.com/developerworks/rfe/?PROD_ID=544
There are ways to do it manually, check these links:
JMS Activation spec on Liberty: “WAS_EndpointInitialState” full profile equivalent property?
Controlling the state of endpoints at runtime
Solutions you could use:
create a script/application that monitors your servers and calls that API to enable/disable the endpoint on a specific server (see the JMX sketch after this list)
or use the dynamic cluster/auto-scaling feature of Liberty and divide your app into two clusters - one with MDBs, one without - and then define a policy that the MDB cluster always has 1 instance available, so once the server dies it is automatically restarted somewhere in the cluster
or use the Kubernetes/ICP platform in the same way - deploying 2 versions of the app and defining different ReplicaSet parameters.
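As a rough illustration of the first option, a monitoring script could use standard JMX client code to pause/resume the MDB endpoint. This is only a sketch: the service URL, the MBean ObjectName and the operation names below are assumptions; check the "Controlling the state of endpoints at runtime" documentation for the exact names your Liberty version exposes.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class EndpointToggle {
    public static void main(String[] args) throws Exception {
        // Placeholder URL - connecting to Liberty's JMX connector may also require the client jar shipped with Liberty
        JMXServiceURL url = new JMXServiceURL("service:jmx:rest://localhost:9443/IBMJMXConnectorREST");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Hypothetical ObjectName of the MDB endpoint control MBean
            ObjectName endpoint = new ObjectName("WebSphere:type=EndpointControl,name=MyApp#MyModule.jar#MyMDB");
            mbs.invoke(endpoint, "pause", null, null); // or "resume" on the node that should take over
        } finally {
            connector.close();
        }
    }
}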
Our needs for a queuing solution are fairly simple: a producer needs to put things in a persistent queue and these need to be handled by a consumer. The queuing system needs to be integrated within a Spring application and distributed across multiple Tomcat hosts.
When reading through questions I see a lot of people warning about using ActiveMQ with Spring, for example, so I am wondering what the alternatives are, keeping simplicity, scalability and performance in mind, when combined with a Spring-based application.
If you are already using Spring, then integrating ActiveMQ with it is fairly easy. The simplest solution would be to run ActiveMQ standalone and have your Tomcat applications simply communicate with it using Spring JMS (or the AMQ client APIs)...
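As a rough sketch of that standalone approach (the broker URL and queue name are placeholders; this assumes the ActiveMQ 5.x client and a javax.jms-based Spring version):

import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class SpringJmsExample {
    public static void main(String[] args) {
        // Connect to the stand-alone broker from the Tomcat-hosted application
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        JmsTemplate jmsTemplate = new JmsTemplate(cf);

        // Producer side: send a message to a queue (queue name is an assumption)
        jmsTemplate.convertAndSend("orders", "hello from Spring JMS");

        // Consumer side: blocking receive from the same queue
        Object msg = jmsTemplate.receiveAndConvert("orders");
        System.out.println("received: " + msg);
    }
}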
Another option is to use Apache Camel. It has great ActiveMQ support, can work with an external or embedded broker, adds many messaging/routing features and can be deployed standalone, in ActiveMQ or in Tomcat easily... good luck
I want to know how to create a physical queue in JMS at run time.
When I searched for this I found Creating JMS Queues at runtime.
But when I read http://activemq.apache.org/how-do-i-create-new-destinations.html I came to understand that the approach mentioned in Creating JMS Queues at runtime does not create any physical queue on the server side.
Please correct me if I'm wrong. If anyone knows how to create a physical queue at run time, please reply.
Thanks in advance.
The creation of "normal" queues is not addressed by the JMS standard. Depending on what you want to do there are two approaches:
use temporary queues - however they have many restrictions; most commonly they are used for request-reply scenarios (see the sketch after this list)
use the API of the JMS provider - however your solution will then depend on that specific provider
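A minimal sketch of the temporary-queue approach in a request-reply scenario, using only the standard JMS API (the broker URL and request queue name are assumptions; ActiveMQ is used here just to have a concrete ConnectionFactory):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TempQueueRequestReply {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // The temporary queue exists only for the lifetime of this connection
            TemporaryQueue replyQueue = session.createTemporaryQueue();

            // Send a request and tell the consumer where to reply
            MessageProducer producer = session.createProducer(session.createQueue("service.requests"));
            TextMessage request = session.createTextMessage("ping");
            request.setJMSReplyTo(replyQueue);
            producer.send(request);

            // Wait (up to 5 seconds) for the reply on the temporary queue
            MessageConsumer consumer = session.createConsumer(replyQueue);
            Message reply = consumer.receive(5000);
            System.out.println("reply: " + reply);
        } finally {
            connection.close();
        }
    }
}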
The JMS standard only addresses sending and receiving data from objects like queues and topics. Creation of JMS artefacts is vendor specific and most often requires using:
1) specific vendor APIs (not JMS)
2) command/admin messages aimed at the JMS server (Command Agents on ActiveMQ)
3) the JMX API
I have used the JMX method, which is the most powerful, but also the most work.
JMX method for ActiveMQ (version 5.0+)
a) JMS Server Setup
1) Enable JMX in the ActiveMQ startup scripts and activemq.xml file
2) If you are authenticating to the server, make sure your user has admin privileges set up in activemq.xml (see http://activemq.apache.org/security.html)
3) Restart the ActiveMQ server
b) Your Client Code
1) Obtain an instance of org.apache.activemq.broker.jmx.BrokerViewMBean (you will need to connect with some JMX connectivity code, which is a bit messy)
2) Use its addQueue method. This will create the queue on the server (see the sketch below).
(The process is similar for HornetQ, but since you mentioned ActiveMQ I have omitted the HornetQ details here.)
I have used this method myself and it works.
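For illustration, a minimal sketch of step b) above, assuming ActiveMQ's default JMX connector on port 1099, the default broker name "localhost", and the ObjectName format used by ActiveMQ 5.8+ (older versions use a different format; the queue name is a placeholder):

import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.broker.jmx.BrokerViewMBean;

public class CreateQueueViaJmx {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName brokerName = new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");
            BrokerViewMBean broker = MBeanServerInvocationHandler.newProxyInstance(
                    mbs, brokerName, BrokerViewMBean.class, true);
            broker.addQueue("MY.NEW.QUEUE"); // creates the physical queue on the server
        } finally {
            connector.close();
        }
    }
}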
An alternative is to use Command Agents in ActiveMQ, but I have no personal experience with these. They are special messages containing admin commands and may do what you want as well.