I am trying out Spring Cloud Data Flow to see if it fits my needs. I am wondering how I can bind my Spring Cloud Stream app to multiple named destinations (in my case, RabbitMQ exchanges). From what I have read, you can bind multiple apps to one named destination (fan-in/fan-out), but not one app to multiple destinations...
Any ideas?
The spring.cloud.stream.bindings.input/output.destination property allows you to specify the name(s) of the destinations. Note that "input" or "output" above corresponds to the channel name of your message handler (e.g., @StreamListener(Sink.INPUT)).
Is that what you're looking for?
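If the goal is one app talking to several exchanges, note that a consumer binding's destination accepts a comma-separated list, while producing to several exchanges means defining one output binding per destination. A minimal sketch (channel and exchange names below are placeholders; the extra output channels would need a custom binding interface in the app):

```properties
# One input channel subscribed to two named destinations (fan-in):
spring.cloud.stream.bindings.input.destination=exchangeA,exchangeB

# Producing to two destinations requires two output channels,
# declared in a custom binding interface in the app:
spring.cloud.stream.bindings.output1.destination=exchangeC
spring.cloud.stream.bindings.output2.destination=exchangeD
```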
This question is related to my earlier question.
I am using Mule 4.4 Community Edition, and I was looking through the various components available for JMS in Mule. I'm confused about JMS "On New Message" and JMS "Listener". All of the documentation for the listener seems to talk about On New Message; however, the code snippet in this link shows a listener:
<jms:listener config-ref="config" destination="#[vars.destination]"/>
So it looks like they are one and the same. If so, then I am confused as to why they show up separately in the Mule palette as individual components.
When I dragged and dropped both of these components into their respective flows, the underlying XML code was still about the listener.
They are the same. The display name of jms:listener is "On New Message"; that's all there is to it. This is to maintain consistency across different modules. If you check other modules, such as Database, Email, or SFTP, the XML DSL always has "listener" as its DSL element. The display names differ because they should be more descriptive than just saying "listener". For example, if you just said "listener" in SFTP, one might think that it also listens for file deletions, but it only listens for new or updated files, so it is more appropriate to name it "On New or Updated File".
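You can confirm this in the XML view: whichever palette item you drag in, the generated DSL is the same element (the config and destination names below are placeholders):

```xml
<!-- The palette item "On New Message" generates this DSL element -->
<flow name="jmsOnNewMessageFlow">
  <jms:listener config-ref="JMS_Config" destination="myQueue"/>
</flow>
```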
Is such a situation even possible?
There is an application "XYZ" (which does not use Kafka) that exposes a REST API. It is a Spring Boot application with which an Angular application communicates.
A new Spring Boot application is created which wants to use Kafka and needs to fetch data from the "XYZ" application. And it wants to do this using Kafka.
The "XYZ" application has an example endpoint, [GET] api/message/all, which returns all messages.
Is there a way to "connect" Kafka directly to this endpoint and read data from it? In short, the idea is for Kafka to consume data directly from the endpoint: communication between two microservices, where one microservice does not use Kafka.
What suggestions do you have for solving this situation? I guess this option is not possible. Is it necessary to add a publisher in application XYZ which will send data to the queue, and only then will the messages be available for consumption by the new application?
Getting them via the REST interface might not be a very good idea.
Simply put, in the messaging world, message-delivery guarantees are a big topic, and the standard ways to solve this with Kafka are usually:
Producing messages from your service to a Kafka topic using the Producer API.
Using Kafka Connect to read from an outbox table.
Since you most likely already have a database attached to your API service, producing messages directly to a topic raises the problem of dual writes: a write to the database might fail while the corresponding write to Kafka succeeds, or vice versa, so you can end up with inconsistent state. Depending on your use case, this may or may not be a problem.
Nevertheless, to overcome that, the outbox pattern can come in handy.
With the outbox pattern, you basically write your messages to a table, the so-called outbox table, and then use Kafka Connect to poll this table. Kafka Connect is essentially a cluster of workers that consumes the database table and forwards its entries to a Kafka topic. You might want to look at Confluent Cloud, which offers a fully managed Kafka Connect service, so you don't have to manage the cluster of workers yourself. Once the messages are in a Kafka topic, you can consume them with the standard Kafka Consumer API or Streams API.
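As a sketch of what the Kafka Connect side might look like with the Confluent JDBC source connector (the connector class is real; the table, topic, and connection details are hypothetical and depend on your setup):

```json
{
  "name": "outbox-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db:5432/app",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "table.whitelist": "outbox",
    "topic.prefix": "outbox-"
  }
}
```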
What you're looking for is a source connector for your specific database, e.g. MongoDB:
https://www.confluent.io/hub/mongodb/kafka-connect-mongodb
For now, most source connectors produce in an at-least-once fashion. This means that the topic you configure the connector to write to might contain a message twice. So if you need messages to be consumed exactly once, make sure you think about deduplicating them.
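A minimal sketch of consumer-side deduplication under at-least-once delivery, assuming each message carries a unique id (e.g. the outbox row's primary key); a production version would use a bounded or persistent store instead of an in-memory set:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of consumer-side deduplication for an at-least-once topic.
// Assumes every message carries a unique id (e.g. the outbox row's primary key).
public class Deduplicator {
    private final Set<String> seen = new HashSet<>();

    /** Returns true the first time an id is observed, false for redeliveries. */
    public boolean firstDelivery(String messageId) {
        return seen.add(messageId);
    }

    public static void main(String[] args) {
        Deduplicator dedup = new Deduplicator();
        List<String> processed = new ArrayList<>();
        // "m2" arrives twice, as an at-least-once connector is allowed to deliver it.
        for (String id : List.of("m1", "m2", "m2", "m3")) {
            if (dedup.firstDelivery(id)) {
                processed.add(id);
            }
        }
        System.out.println(processed); // prints [m1, m2, m3]
    }
}
```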
TL;DR: I can't seem to pass messages from one RabbitMQ vhost to another RabbitMQ vhost.
I'm having an issue with Spring Cloud Data Flow where it appears that, despite specifying different RabbitMQ vhosts for the source and the sink, messages never get to the destination exchange.
My dataflow stream looks like this: RabbitMQ Source | CustomProcessor | RabbitMQ Sink
RabbitMQ Source reads from a queue on vHostA and RabbitMQ Sink should output to ExchangeBlah on vHostB.
However, no messages end up on ExchangeBlah on vHostB, and I get errors in the RabbitMQ Sink log saying:
Channel shutdown: channel error; protocol method: 'method(reply-code=404, reply-text=NOT_FOUND - no exchange 'ExchangeBlah' in vhost 'vHostA', class-id=60, method-id=40)
I've got a feeling that this might be related to the Spring environment variable
spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.virtual-host=vhostA
As Data Flow uses queues for communication between the different stages of the stream, if I don't specify this setting, the RabbitMQ source and sink communication queues are created on the vhosts specified in their respective configs; however, no communication queue is created for the CustomProcessor.
Therefore, data gets stuck in the source's communication queue.
Also, I know that shovels could feasibly get around this, but it feels like if the RabbitMQ sink offers the option of outputting to a different vhost, then it should work.
All that being said, it may well be a bug in the RabbitMQ stream source/sink apps.
UPDATE:
Looking at the stream definition (once the stream has been deployed), the spring.rabbitmq.virtual-host switch is defined twice: once with vHostB, which is defined against the sink, and then later with vHostA, which is the Spring property.
After removing the virtual-host application property and explicitly setting spring.rabbitmq.virtual-host, host, username, and password on the processor (as well as on the RabbitMQ source and sink), messages make their way to the processor's communication queue; but as the RabbitMQ sink is set to a different vhost, they don't seem to get any further.
In this scenario, the communication queues created between the various stages of the stream live on the same vhost the source is reading from (vHostA). As we can only give the spring.rabbitmq.virtual-host setting to each app once, the sink doesn't know to look at the communication queues in order to pass that data on to its destination exchange on vHostB.
It's almost as if there are missing switches on the RabbitMQ source and sink; or am I missing an overall setting that defines the vhost where the communication queues should reside, without overriding the source and destination vhosts on the RabbitMQ source and sink?
Please note that SCDF doesn't directly communicate with RabbitMQ. SCDF attempts to automate the creation of Spring Cloud Stream "env-vars" based on well-defined naming conventions derived from the stream and app names.
It is the apps themselves that connect to publish/subscribe to RabbitMQ exchanges independently. As long as the right "env-vars" land as properties in the apps when they bootstrap, they should be able to connect as per the configuration.
You pointed out the spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.virtual-host=vhostA property. If supplied, SCDF attempts to propagate it as the virtual host to all the stream applications it deploys to the targeted platform.
In your case, it sounds like you'd want to override the virtual host at the source and sink level independently, which you can accomplish by passing specific properties to these apps in the stream definition, either in-line or as deployment properties.
Once you do, you can confirm whether or not they are taken into account by accessing the app's actuator endpoint; specifically, /configprops would be useful.
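For example, a sketch of in-line property overrides in the SCDF shell (the app names "rabbit" and "custom-processor" and the vhost names are placeholders; exact app names depend on the starters you have registered):

```
stream create rabbitTest --definition "rabbit --spring.rabbitmq.virtual-host=vHostA | custom-processor | rabbit --spring.rabbitmq.virtual-host=vHostB"
```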
I have to run two instances of the same application that read messages from 'queue-1' and write them back to another queue 'queue-2'.
I need the messages inside the two queues to be ordered by a specific property (a sequence number) that the producer initially adds to every message. As per the documentation, inside queue-1 the order of messages will be preserved, as they are sent by a single producer. But because multiple consumers read, process, and send the processed messages to queue-2, the order of messages inside queue-2 might be lost.
So my task is to make sure that messages are delivered to queue-2 in the same order as they were read from queue-1. I have implemented the re-sequencer pattern from Apache Camel to re-order messages bound for queue-2. The re-sequencer works fine but results in data-transfer overhead, as the Camel routes run locally.
Thinking about doing this in a better way, I have three questions:
1. Does Artemis inherently support re-ordering of messages inside a queue using a property such as a sequence number?
2. Is it possible to run the routes inside the server? If yes, can you give an example or a link to the documentation?
3. Some Artemis features, such as diverts (splits), require modifying the broker configuration (the broker.xml file). Is there a way to do this programmatically and dynamically, so I can decide when to start diverting messages? I know this can be accomplished using Camel, but I want everything to run in the server.
Does Artemis inherently support re-ordering of messages inside a queue using a property such as a sequence number?
No. Camel is really the best solution here in my opinion.
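Since the broker won't re-order messages for you, the buffering has to happen on the consumer side, which is essentially what Camel's resequencer does. A minimal, Camel-independent sketch of that logic, assuming a contiguous integer sequence number starting at 0:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of a resequencer: buffers out-of-order messages and releases them
// in contiguous sequence-number order, as Camel's resequencer pattern does.
public class Resequencer {
    // Each entry is {sequenceNumber, payload}, ordered by sequence number.
    private final PriorityQueue<long[]> buffer =
            new PriorityQueue<>(Comparator.comparingLong(m -> m[0]));
    private long nextSeq = 0; // next sequence number allowed to be emitted

    /** Offer a message; returns the payloads that can now be released in order. */
    public List<Long> offer(long seq, long payload) {
        buffer.add(new long[] {seq, payload});
        List<Long> released = new ArrayList<>();
        while (!buffer.isEmpty() && buffer.peek()[0] == nextSeq) {
            released.add(buffer.poll()[1]);
            nextSeq++;
        }
        return released;
    }

    public static void main(String[] args) {
        Resequencer r = new Resequencer();
        System.out.println(r.offer(1, 11L)); // prints [] - still waiting for seq 0
        System.out.println(r.offer(0, 10L)); // prints [10, 11]
        System.out.println(r.offer(2, 12L)); // prints [12]
    }
}
```

A real deployment would also need a timeout or capacity bound so a lost message cannot stall the buffer forever, which is what Camel's resequencer options provide.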
Is it possible to run the routes inside the server? If yes, can you give an example or a link to the documentation?
You should be able to do the same kind of thing in Artemis as in ActiveMQ 5.x, using a web application with a Camel context. The 5.x documentation is here.
Some Artemis features, such as diverts (splits), require modifying the broker configuration (the broker.xml file). Is there a way to do this programmatically and dynamically, so I can decide when to start diverting messages?
You can use the Artemis management methods to create, modify, and delete diverts programmatically (or administratively) at runtime. However, these modifications are volatile (i.e., they won't survive a broker restart).
I'm trying to expose a process definition in TIBCO BW Designer 5.7 as a web service, but I've run into some snags. For some reason, I cannot start the Generate Web Service Wizard, because my process does not appear in the "Add More Processes to Interface" list.
I've been searching online, but without much success. What I've gathered is that I need to reference external schemas (using XML Element Reference) in my input (Start) and output (End), which I have done. So what could possibly be wrong?
Do I need to include any Process Variables or Partners under the Process Definition?
I'm very new to Designer so would appreciate some help here!
To expose a BusinessWorks process as a web service, you need to use a WSDL message as input and output (and optionally as error output). If you already have a process that is used by other processes and do not want to change its input/output schema, you could create another process that essentially wraps your initial process but exposes its input/output as WSDL messages. My suggestion would be to follow these approximate steps:
Create an XML schema containing the input and output formats
Create a WSDL resource
Add two Message resources (input/output), referencing the above XML schema
Add a PortType resource
Add an Operation resource referencing the two Message resources as input and output
Set the input/output of the process you want to expose to the WSDL messages defined above
Create a Service resource
Add the WSDL operation to the Service interface
Set the implementation of the operation to your process definition
Add a SOAP endpoint with an HTTP transport
Add the Service resource to your Process Archive
For more details on the parameters that can be used, see the BusinessWorks Palette Reference documentation.
The most common mistake in this case is not using an XML schema for the input and output. Make sure that you have one for every process in your project, and then you can continue with your web service generation.
Kind Regards