TL;DR: Can't seem to pass messages from one RabbitMQ VHost to another RabbitMQ VHost.
I'm having an issue with Spring Cloud Dataflow where, despite specifying different RabbitMQ VHosts for the source and the sink, messages never reach the destination exchange.
My dataflow stream looks like this: RabbitMQ Source | CustomProcessor | RabbitMQ Sink
RabbitMQ Source reads from a queue on vHostA and RabbitMQ Sink should output to ExchangeBlah on vHostB.
However, no messages end up on ExchangeBlah on vHostB, and I get errors in the RabbitMQ Sink log saying:
Channel shutdown: channel error; protocol method: 'method(reply-code=404, reply-text=NOT_FOUND - no exchange 'ExchangeBlah' in vhost 'vHostA', class-id=60, method-id=40)
I've got a feeling that this might be related to the Spring environment variable
spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.virtual-host=vhostA
As Dataflow uses queues for communication between the different stages of the stream, if I don't specify this setting, the RabbitMQ source and sink communication queues are created on the VHosts specified in their respective configs; however, no communication queue is created for the CustomProcessor.
Therefore, data gets stuck in the Source communication queue.
Also, I know that Shovels could feasibly get around this, but it feels like if the RabbitMQ sink gives you the option of outputting to a different VHost, then it should work.
All being said, it may well be a bug with the Rabbit Stream Source/Sink apps.
UPDATE:
Looking at the stream definition (once the stream has been deployed), the spring.rabbitmq.virtual-host switch is defined twice: once with vHostB, which is defined against the sink, and then later with vHostA, which comes from the Spring property.
Removing the virtual-host application property and explicitly setting spring.rabbitmq.virtual-host, host, username and password on the processor (as well as on the RabbitMQ source and sink), the data makes its way to the processor communication queue, but as the RabbitMQ sink is set to a different VHost, it doesn't seem to get any further.
In this scenario, the communication queues which are created between the various stages of the stream all end up on the VHost which the source is reading from (vHostA). As we can only give the spring.rabbitmq.virtual-host setting to each app once, the sink doesn't know to look at the communication queues in order to pass that data on to its destination exchange on vHostB.
It's almost as if there are missing switches on the RabbitMQ source and sink. Or am I missing an overall setting which defines the VHost where the communication queues should reside, without overriding the source and destination VHosts on the RabbitMQ source and sink?
Please note that SCDF doesn't directly communicate with RabbitMQ. SCDF attempts to automate the creation of Spring Cloud Stream "env-vars" based on well-defined naming conventions derived from the stream+app names.
It is the apps themselves that connect to publish/subscribe to RabbitMQ exchanges independently. As long as the right "env-vars" land as properties on the apps when they bootstrap, the apps should be able to connect as per the configuration.
You pointed out the spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.virtual-host=vhostA property. If supplied, SCDF attempts to propagate it as the virtual-host to all the stream applications that it deploys to the targeted platform.
In your case, it sounds like you'd want to override the virtual-host at the source and the sink level independently, which you can accomplish with app-specific properties, supplied either in-line in the stream definition or as deployment properties.
Once you do, you can confirm whether they are taken into account by accessing each app's actuator endpoints; /configprops in particular would be useful.
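For illustration, a hypothetical stream definition with per-app overrides might look like this (the stream and app names, hosts and credentials are placeholders, and the short-form --queues/--exchange properties assume the out-of-the-box rabbit source/sink starters):

stream create rabbit-bridge --definition "rabbit --spring.rabbitmq.virtual-host=vHostA --queues=sourceQueue | custom-processor | rabbit --spring.rabbitmq.virtual-host=vHostB --exchange=ExchangeBlah" --deploy

If the overrides are picked up, each app's /configprops output should show the virtual-host you expect for that specific app.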
Related
Is such a situation even possible?
There is an application "XYZ" (in which there is no Kafka) that exposes a REST API. It is a Spring Boot application with which an Angular application communicates.
A new application (Spring Boot) is created which needs to fetch data from the "XYZ" application, and it wants to do this using Kafka.
The "XYZ" application has an example endpoint [GET] api/message/all which displays all messages.
Is there a way to "connect" Kafka directly to this endpoint and read data from it? In short, the idea is for Kafka to consume data directly from the endpoint: communication between two microservices, where one microservice does not have Kafka.
What suggestions do you have for solving this situation? I guess this option is not possible. Is it necessary to add a publisher in application XYZ which will send data to a topic, so that only then is it available for consumption by the new application?
Getting the messages via the REST interface might not be a very good idea.
Simply put, in the messaging world message delivery guarantees are a big topic, and the standard ways to solve that with Kafka are usually:
1. Producing messages from your service using the Producer API to a Kafka topic (a minimal sketch follows this list).
2. Using Kafka Connect to read from an outbox table.
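For the first option, a minimal Producer API sketch might look like this (the broker address, topic name and serializers are assumptions, not anything from your setup):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all"); // wait for all in-sync replicas, for stronger delivery guarantees

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "xyz-messages" is a hypothetical topic name
            producer.send(new ProducerRecord<>("xyz-messages", "message-id-1", "payload"));
        }
    }
}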
Since you most likely have a database already attached to your API service, the problem of dual writes might arise if you choose to produce the messages directly to a topic. What this means is that a write to the database might fail while the same data is successfully written to Kafka, or vice versa, so you can end up with inconsistent state. Depending on your use case this might or might not be a problem.
Nevertheless, to overcome that, the outbox pattern can come in handy.
Via the outbox pattern, you'd basically write your messages to a table, a so-called outbox table, and then use Kafka Connect to poll this table of the database. Kafka Connect is basically a cluster of workers that consume this database table and forward its entries to a Kafka topic. You might want to look at Confluent Cloud, which offers a fully managed Kafka Connect service, so you don't have to manage the cluster of workers yourself. Once you have the messages in a Kafka topic, you can consume them with the standard Kafka Consumer API / Streams API.
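To make that concrete, a hedged sketch of a transactional outbox write over plain JDBC (the table and column names are invented for the example):

import java.sql.Connection;
import java.sql.PreparedStatement;

public class OrderService {
    // Writes the business row and the outbox row in ONE transaction,
    // avoiding the dual-write problem described above.
    public void createOrder(Connection conn, String orderId, String payloadJson) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement order = conn.prepareStatement(
                 "INSERT INTO orders(id, payload) VALUES (?, ?)");
             PreparedStatement outbox = conn.prepareStatement(
                 "INSERT INTO outbox(aggregate_id, event_type, payload) VALUES (?, ?, ?)")) {
            order.setString(1, orderId);
            order.setString(2, payloadJson);
            order.executeUpdate();

            outbox.setString(1, orderId);
            outbox.setString(2, "OrderCreated");
            outbox.setString(3, payloadJson);
            outbox.executeUpdate();

            conn.commit(); // both rows become visible together, or neither does
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}

A Kafka Connect source connector then polls or tails the outbox table and publishes each new row to a topic.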
What you're looking for is a source connector for your specific database, e.g. MongoDB: https://www.confluent.io/hub/mongodb/kafka-connect-mongodb
For now, most source connectors produce in an at-least-once fashion. This means that the topic you configure the connector to write to might contain a message twice, so if you need the messages to be consumed exactly once, make sure you think about deduplicating them.
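A minimal consumer-side deduplication sketch, assuming each record's key carries a unique message ID (a real system would persist the seen IDs rather than keep them in memory):

import java.time.Duration;
import java.util.HashSet;
import java.util.List;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DedupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "dedup-demo");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        Set<String> seen = new HashSet<>(); // in-memory only, for the sketch
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("xyz-messages")); // same hypothetical topic as above
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    if (seen.add(record.key())) { // add() returns false for a duplicate key
                        process(record.value());
                    }
                }
            }
        }
    }

    private static void process(String payload) {
        System.out.println("Processing: " + payload);
    }
}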
First, let me explain what I have tried with classic ActiveMQ which worked perfectly for my requirements:
I have a few queues with a naming template, and each queue represents a tenant (customer). The naming pattern is queue.<tenant-id>.event; here, I used test1 to test5 for simplicity.
Multiple producers are putting messages on these different queues based on which tenants are requesting it.
My ActiveMQ queues look like this in the web console:
Queues in the classic ActiveMQ
Then I started a Spring JMS listener with a wildcard to be able to read from all of these queues with one listener. The code is like this:
@JmsListener(destination = "queue.>")
public void receiveMessage(Event event) {
    // Process the event message
}
What I have observed, and cannot configure Artemis to do the same, is:
1. Listening on ActiveMQ queues with a wildcard did not create a new queue (a listener queue).
2. Consuming the messages with a wildcard listener actually reduces the number of pending messages in the actual queues.
3. The wildcard listener reads messages quite fairly from all queues. It still respects FIFO within each queue, but not across queues. For example, when I put 100 messages in queue.test1.event and only then add 100 messages in queue.test2.event, starting the wildcard listener makes it read messages fairly from both queues, even though all the messages in queue.test2.event were added after the 100 messages in queue.test1.event.
I need features #2 and #3. The first is just an observation, which I think is the root cause of my problem in Artemis.
Now, what happened when I moved to Artemis is:
The wildcard pattern is a little different, but I ran the same scenario. The listener looks like:
@JmsListener(destination = "queue.#")
public void receiveMessage(Event event) {
    // Process the event message
}
As you can see, the wildcard template is changed to queue.# to be able to read from all those queues.
My Artemis queues look like this in the web console:
Queues in Artemis
My observation on the web console shows I cannot achieve the same here:
1. As you see in the picture, the message counts of the original queues, into which I put the messages, are still kept, even though only 44 of them remain to be processed (look at the message count of queue.#) and the rest have already been read by the wildcard listener.
This can cause a storage issue for me, since all of my messages are persisted and I can't play with message expiry either.
2. As you see in the picture, the listener created another queue named queue.#, into which Artemis seems to internally copy the messages from the other queues.
Not a problem, just an observation.
3. It respects FIFO across all queues, which I guess is because Artemis is doing the copy from the original queues to the wildcard one.
This creates a huge problem for me. Although I still want it to respect FIFO inside each queue, I also want it to keep consuming from the other queues, because if one customer is processing huge tasks, it should not block the others from continuing theirs.
PS1: I restrict the listeners in both tests to just consume one message at a time to be able to test it properly.
PS2: If you wonder why I don't use classic ActiveMQ if it does exactly what I need, the answer is: Apache will make Artemis its major version in the future (once it reaches a certain level of maturity) and I would like to be aligned with the roadmap. Quote from its website:
Once Artemis reaches a sufficient level of feature parity with the "Classic" code-base it will become the next major version of ActiveMQ
PS3: I am using spring-boot and its starter packages to connect and put/consume messages.
PS4: I am using the default configuration for both solutions and installations.
Simply put, ActiveMQ Artemis doesn't support wildcard consumers. It only supports wildcard addresses, which have similar but different semantics (as explained in the answer to this other question of yours).
Feel free to open an issue to request this feature be implemented.
We have a requirement to copy messages from one ActiveMQ broker to another. The messages just have to be copied, so that each message exists on both brokers.
I can think of a custom application that subscribes to a certain destination, reads the messages, and re-posts them to the corresponding destination on multiple brokers.
I do not have access to make changes on the broker, so the Network of Brokers option is out.
Is there any best practice or tools available to copy A-MQ messages from one broker to another?
Without having access to the target broker, as far as I know and have read, there is no shortcut to avoid a custom application that re-posts those messages.
However, depending on the messages you want to re-post, there are some features offered by ActiveMQ that could facilitate your implementation (though they are not for free, in terms of computational cost).
For example, if you want to copy ALL the messages sent through that broker to the other one, you might consider using Mirrored Queues with a specific prefix (e.g. "copy"), which would allow you to have a single consumer using a wildcard after that prefix (e.g. "copy.>"). That consumer would get ALL the messages sent to the broker, and it would simplify your implementation, since you would only have to care about that single consumer and re-post from it. However, this has costs: as described in the documentation, enabling mirrored queues duplicates each queue/topic in the system and posts each message twice. You need to consider whether this is an important inconvenience in your case, depending on the volume of messages and the memory your broker has available.
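As a hedged sketch, assuming the mirror prefix has been set to "copy." (mirrored queues republish each queue's messages to a topic, hence the topic destination; the broker URL is a placeholder):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class MirrorConsumer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://source-broker:61616"); // placeholder URL
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Mirrored queues republish every queue message to a topic named
        // <prefix><queueName>; "copy." is the prefix assumed above.
        Topic mirror = session.createTopic("copy.>");
        MessageConsumer consumer = session.createConsumer(mirror);
        Message message;
        while ((message = consumer.receive(5000)) != null) {
            // re-post "message" to the target broker here
        }
        connection.close();
    }
}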
In case you only want to copy SOME of the messages and not all, then I believe the most elegant way to handle it is to create an abstraction of your Consumer class (or specific implementation), and use that special implementation for the queues you want to re-post. That class would be responsible for re-posting the messages to the other broker, in a way that is transparent to the code using it.
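A rough sketch of such a re-posting wrapper, assuming plain JMS and a separately created session on the target broker:

import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Session;

// Wraps any MessageListener and re-posts each message to another broker
// before delegating, so the existing consumer code stays unchanged.
public class RepostingListener implements MessageListener {
    private final MessageListener delegate;
    private final MessageProducer targetProducer;

    public RepostingListener(MessageListener delegate, Session targetSession,
                             Destination targetDestination) throws Exception {
        this.delegate = delegate;
        this.targetProducer = targetSession.createProducer(targetDestination);
    }

    @Override
    public void onMessage(Message message) {
        try {
            targetProducer.send(message); // copy to the other broker first
        } catch (Exception e) {
            // decide here whether to retry, dead-letter, or fail the delivery
            throw new RuntimeException(e);
        }
        delegate.onMessage(message); // normal processing continues
    }
}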
I have talked above about consumers, but the same concept could apply to topics and subscribers. Hope these ideas help :)
I have to run two instances of the same application that read messages from 'queue-1' and write them back to another queue 'queue-2'.
I need the messages inside the two queues to be ordered by a specific property (a sequence number) which is initially added to every message by the producer. As per the documentation, inside queue-1 the order of messages will be preserved, as the messages are sent by a single producer. But because multiple consumers read, process and send the processed messages to queue-2, the order of messages inside queue-2 might be lost.
So my task is to make sure that messages are delivered to queue-2 in the same order as they were read from queue-1. I have implemented the re-sequencer pattern from Apache Camel to re-order messages for queue-2. The re-sequencer works fine, but it results in data-transfer overhead, as the Camel routes run locally.
Thinking about doing it in a better way, I have three questions:
1. Does Artemis inherently support re-ordering of messages inside a queue using a property such as a sequence number?
2. Is it possible to run the routes inside the server? If yes, can you give an example or a link to the documentation?
3. Some Artemis features, such as divert (split), require modifying the broker configuration (the broker.xml file). Is there a way to do this programmatically and dynamically, so I can decide when to start diverting messages? I know this can be accomplished by using Camel, but I want everything to run in the server.
Does Artemis inherently support re-ordering of messages inside a queue using a property such as a sequence number?
No. Camel is really the best solution here in my opinion.
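For reference, a Camel resequencer route of the kind you described might look roughly like this (the header name, capacity and timeout are assumptions):

import org.apache.camel.builder.RouteBuilder;

public class ResequenceRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Buffers incoming messages and releases them ordered by the
        // sequence-number header; header name and tuning values are assumed.
        from("jms:queue:queue-1")
            .resequence(header("sequenceNumber"))
                .stream().capacity(1000).timeout(5000)
            .to("jms:queue:queue-2");
    }
}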
Is it possible to run the routes inside the server? If yes, can you give an example or a link to the documentation?
You should be able to do the same kind of thing in Artemis as in ActiveMQ 5.x using a web application with a Camel context. The 5.x doc is here.
Some Artemis features, such as divert (split), require modifying the broker configuration (the broker.xml file). Is there a way to do this programmatically and dynamically so I can decide when to start diverting messages?
You can use the Artemis management methods to create, modify, and delete diverts programmatically (or administratively) at runtime. However, these modifications will be volatile (i.e. they won't survive a broker restart).
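For example, over JMX the broker's ActiveMQServerControl exposes createDivert/destroyDivert. A hedged sketch, assuming the default JMX service URL and broker object name:

import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;
import org.apache.activemq.artemis.api.core.management.ObjectNameBuilder;

public class DivertManager {
    public static void main(String[] args) throws Exception {
        // The JMX URL and the default broker object name are assumptions;
        // adjust them to your broker's configuration.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName on = ObjectNameBuilder.DEFAULT.getActiveMQServerObjectName();
            ActiveMQServerControl control = MBeanServerInvocationHandler.newProxyInstance(
                mbsc, on, ActiveMQServerControl.class, false);

            // name, routingName, source address, forwarding address,
            // exclusive?, filter, transformer class
            control.createDivert("my-divert", "my-divert", "source.address",
                                 "target.address", false, null, null);
            // ...and later, to remove it again:
            // control.destroyDivert("my-divert");
        }
    }
}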
I want to know how to retrieve messages that are already present on a Solace queue. I am able to send and receive the messages I create from my machine, but I can't receive any messages that are already present in the queue. I want to retrieve those messages and store them in a text file.
I am sending my messages using the Solace Java API, pulled in via Gradle. Can anyone guide me regarding the same?
There's an exact tutorial for this.
If you had downloaded the Solace Java JAR via the Maven links, you might have missed the entire suite, which contains all the dependent JARs distributed by Solace, API reference docs, as well as a bunch of samples. The latter is in addition to what you may find on http://dev.solace.com/get-started/java-tutorials/. Get the entire ZIP file, as well as the Release Notes, from http://dev.solace.com/downloads/.
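That tutorial essentially boils down to binding a flow to the queue. A condensed sketch, with the host, VPN, credentials and queue name as placeholders, that appends each received message to a text file:

import java.io.FileWriter;
import java.io.PrintWriter;
import com.solacesystems.jcsmp.BytesXMLMessage;
import com.solacesystems.jcsmp.ConsumerFlowProperties;
import com.solacesystems.jcsmp.FlowReceiver;
import com.solacesystems.jcsmp.JCSMPException;
import com.solacesystems.jcsmp.JCSMPFactory;
import com.solacesystems.jcsmp.JCSMPProperties;
import com.solacesystems.jcsmp.JCSMPSession;
import com.solacesystems.jcsmp.Queue;
import com.solacesystems.jcsmp.TextMessage;
import com.solacesystems.jcsmp.XMLMessageListener;

public class QueueDrainer {
    public static void main(String[] args) throws Exception {
        JCSMPProperties props = new JCSMPProperties();
        props.setProperty(JCSMPProperties.HOST, "tcp://localhost:55555"); // placeholder
        props.setProperty(JCSMPProperties.VPN_NAME, "default");           // placeholder
        props.setProperty(JCSMPProperties.USERNAME, "user");              // placeholder
        JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
        session.connect();

        Queue queue = JCSMPFactory.onlyInstance().createQueue("MyQueue");  // placeholder
        ConsumerFlowProperties flowProps = new ConsumerFlowProperties();
        flowProps.setEndpoint(queue);
        flowProps.setAckMode(JCSMPProperties.SUPPORTED_MESSAGE_ACK_CLIENT);

        PrintWriter out = new PrintWriter(new FileWriter("messages.txt", true), true);
        FlowReceiver flow = session.createFlow(new XMLMessageListener() {
            @Override
            public void onReceive(BytesXMLMessage msg) {
                String text = (msg instanceof TextMessage)
                        ? ((TextMessage) msg).getText() : msg.dump();
                out.println(text);  // append the message to the text file
                msg.ackMessage();   // client-ack only after persisting it
            }

            @Override
            public void onException(JCSMPException e) {
                e.printStackTrace();
            }
        }, flowProps);
        flow.start(); // delivery of the already-queued messages begins here

        Thread.sleep(60_000); // keep the JVM alive while the queue drains
        session.closeSession();
    }
}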
There are multiple possibilities why you cannot receive messages from a queue:
Queue name is misspelt.
Queue permissions are wrong.
Queue is shut down on the egress.
Message spool is not active on the router.
Client profile is set not to receive Guaranteed Messages.
Number of egress flows has exceeded the router / message-vpn limit.
Bind count on the queue has been exceeded.
The egress flow is not active.
Client is not connected to the router.
...
Examining the error / exception will give you information why you cannot receive messages.