I have a Spring XD rabbit source in my stream definition, but it fails when the queue it listens to has not yet been created. When I use Spring Integration with Spring Boot, I am able to declare the queue in my JavaConfig.
My stream definition:
stream create --name HOLA_Q --definition "rabbit --requeue=false | my-own-processor | null" --deploy
I have tried declaring a rabbit admin in the spring-module.xml inside my-own-processor, but it doesn't work and never gets triggered during stream deployment.
Or is this rabbit queue auto-creation feature simply not yet supported?
Many Thanks
Auto creation of the queue by the source is not currently supported.
Per the documentation:
The queue(s) must exist before the stream is deployed. We do not create the queue(s) automatically. However, you can easily create a Queue using the RabbitMQ web UI. Then, using that same UI, you can navigate to the "rabbittest" Queue and publish test messages to it.
You could create a custom rabbit source module that adds the queue (and, optionally, an exchange and binding) together with a RabbitAdmin bean to the application context; the queue/exchange/binding will then be declared when the module starts.
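For reference, a minimal sketch of that declaration style in JavaConfig (the queue name and durability here are assumptions; match them to whatever the rabbit source's queues property points at):

import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Any Queue (and Exchange/Binding) beans in the context are declared on the
// broker by the RabbitAdmin when the application context starts.
@Configuration
public class QueueDeclarationConfig {

    @Bean
    public RabbitAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
        return new RabbitAdmin(connectionFactory);
    }

    @Bean
    public Queue holaQueue() {
        return new Queue("HOLA_Q", true); // durable; the name is a placeholder
    }
}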
Related
I have an application that listens to an ActiveMQ queue and starts a batch job when a message is received.
I'd like to use Spring Cloud Dataflow to provide a UI, but I can't find information on how to configure it.
Since it uses Spring Boot, I should be able to replicate how my application currently works (use a REST API to make it listen to ActiveMQ and start the job when a message arrives), but I can't find anything on how to make it start the batch job in Cloud Dataflow.
You have a few options here.
Option 1: Launch your application as-is and manually send a message to launch the task.
Any arbitrary Spring Boot application can be launched from Dataflow (simply register it as type = "App").
Taken from https://github.com/spring-cloud/spring-cloud-dataflow/blob/main/spring-cloud-dataflow-docs/src/main/asciidoc/streams.adoc#register-a-stream-application:
Registering an application by using --type app is the same as registering a source, processor or sink. Applications of the type app can be used only in the Stream Application DSL (which uses double pipes || instead of single pipes | in the DSL) and instructs Data Flow not to configure the Spring Cloud Stream binding properties of the application. The application that is registered using --type app does not have to be a Spring Cloud Stream application. It can be any Spring Boot application. See the Stream Application DSL introduction for more about using this application type.
You would have to send the task launch request from your code. You can use the Dataflow REST client to do this. You can get an idea of how to do that by looking at https://github.com/spring-cloud/spring-cloud-dataflow/tree/main/spring-cloud-dataflow-tasklauncher/spring-cloud-dataflow-tasklauncher-sink.
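As a rough sketch (the task name is a placeholder, and the exact launch() signature varies between client releases, so check your version rather than treating this as gospel):

import java.net.URI;
import java.util.Collections;

import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;

// Launch a pre-registered task definition over the Dataflow REST API.
public class TaskLaunchExample {

    public static void main(String[] args) {
        DataFlowTemplate dataFlow = new DataFlowTemplate(URI.create("http://localhost:9393"));
        dataFlow.taskOperations().launch("my-batch-task", // hypothetical task name
                Collections.emptyMap(),                   // deployment properties
                Collections.emptyList());                 // command-line arguments
    }
}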
Option 2: Use pre-built stream applications to model the same flow as your application.
The app you describe can be logically modeled as a Spring Cloud Stream application.
There is a JMS source (provides messages to signal the need to kickoff task/batch job)
There is a TaskLauncher sink (receives messages and kicks off the task/batch job)
This app can actually be constructed with little effort by using the pre-packaged applications to model this flow:
JMS Source
Dataflow Tasklauncher Sink
If you have to register these applications in the UI, they can be found at:
maven://org.springframework.cloud.stream.app:jms-source-kafka:3.1.1
maven://org.springframework.cloud:spring-cloud-dataflow-tasklauncher-sink-kafka:2.9.2
Stream definition:
jms-source | dataflow-tasklauncher-sink
The README(s) on the above source/sinks give details about the configuration options.
Option 3: Custom Spring Cloud Stream app with function composition
The previous option creates 2 separate apps. However, if you want to keep the logic in a single app, you can create a custom Spring Cloud Stream app that uses function composition, leveraging the pre-built, reusable Java functions that the apps in option 2 are built upon (see the sketch after the links below):
JMS Supplier
TaskLauncherFunction
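A rough sketch of what that single app could look like (the function bean names jmsSupplier and taskLauncherFunction are assumptions; check the READMEs of the two function artifacts for the real ones):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// One Boot app that depends on the jms-supplier and task-launcher function
// artifacts and composes them, so each JMS message triggers a task launch
// in-process, e.g. with this in application.properties:
//   spring.cloud.function.definition=jmsSupplier|taskLauncherFunction
@SpringBootApplication
public class JmsTaskLauncherApplication {

    public static void main(String[] args) {
        SpringApplication.run(JmsTaskLauncherApplication.class, args);
    }
}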
Is dynamic routing the same as dynamic destination binding in Spring Cloud Stream?
In RabbitMQ dynamic routing, all producers publish to the same exchange: the producer is configured with routingKeyExpression, the consumer listener is configured with bindingRoutingKey, and the exchange routes each message to the binding whose key matches.
Can this be accomplished using StreamBridge or BinderAwareChannelResolver? If not, how does Spring handle this when someone wants to move from Rabbit to another broker?
Yes, this can be accomplished with StreamBridge, the RoutingFunction, spring.cloud.stream.sendto.destination, etc., depending on your use case, which is not clear from your post; hence I am giving you all the options.
You can find more information here and here for StreamBridge.
The BinderAwareChannelResolver is deprecated in favor of StreamBridge.
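A minimal sketch of dynamic destination resolution with StreamBridge (the controller, route, and destination naming scheme are hypothetical):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// The destination name is computed at runtime; the binder provisions it on
// first send (for Rabbit, an exchange), keeping the code broker-agnostic.
@RestController
public class RoutingController {

    @Autowired
    private StreamBridge streamBridge;

    @PostMapping("/orders/{region}")
    public void route(@PathVariable String region, @RequestBody String order) {
        streamBridge.send("orders-" + region, order); // hypothetical naming scheme
    }
}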
I am currently working on an integration application that uses Camel with Spring Boot. One Camel route in the application receives messages from a source Artemis broker, transforms them, and sends them to another Artemis broker.
The camel route looks like this:
from(sourceQueue).process(transformProcessor).to(destinationQueue)
When the Camel route starts, it recreates the queues named in the from and to endpoints, and the previous messages are lost. We do not expect this to happen.
One way I found around this is to disable queue and topic auto-creation in the Artemis broker.xml and create the queue(s) using the Artemis API.
My question is: can we configure the Camel JMS / AMQP component to create the queue only if it is not present, and otherwise use the existing one?
By default, Camel uses the DynamicDestinationResolver. You can create your own custom DestinationResolver and plug it into your endpoint (or into your component):
.to("jms:queue:myQueue?destinationResolver=MyCustomDestinationResolver");
You can also use JndiDestinationResolver, which by default does not fallback into creating a dynamic destination.
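A bare-bones sketch of a custom resolver like the one referenced above (note that Session.createQueue only returns a handle; whether a missing queue actually gets created is governed broker-side, e.g. by Artemis's auto-create-queues setting, so pair this with that configuration):

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Session;

import org.springframework.jms.support.destination.DestinationResolver;

// Resolve destinations without any JNDI lookup; add your own existence
// check here if your broker exposes one.
public class MyCustomDestinationResolver implements DestinationResolver {

    @Override
    public Destination resolveDestinationName(Session session, String destinationName,
            boolean pubSubDomain) throws JMSException {
        return pubSubDomain
                ? session.createTopic(destinationName)
                : session.createQueue(destinationName);
    }
}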
I don't know Artemis, but it sounds odd for a broker to delete a queue along with its messages. At least its "brother" ActiveMQ has the behavior you expect by default: queues are created automatically if they do not exist, but they simply stay if they already exist.
Are you sure the queues are recreated on route start? Are these queues persistent? Could it be that a consumer just drains the queue? I also found an Artemis queue attribute named auto-delete-queues that deletes a queue once it has been drained by a consumer:
auto-delete-queues: Whether or not the broker should automatically delete auto-created JMS queues when they have both 0 consumers and 0 messages.
In my POC, I am using Spring Cloud Config and Spring Cloud Stream Rabbit. I want to dynamically change the number of listeners (concurrency). Is it possible to do that? I want to do the following:
1) If there are too many messages in the queue, I want to increase the concurrency level.
2) When my downstream system is unavailable, I want to stop processing messages from the queue (in short, concurrency level 0).
How can I achieve this?
Thanks for the help.
The listener container running in the binder supports such changes (although you can't go down to 0, the container can be stop()ped).
However, spring-cloud-stream provides no mechanism for you to get a reference to the listener container.
You might want to consider using a @RabbitListener from Spring AMQP instead; it will give you complete control over the listener container.
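A minimal sketch of that control (the queue name and listener id are placeholders):

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class ConcurrencyControl {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    @RabbitListener(id = "myListener", queues = "myQueue")
    public void handle(String payload) {
        // process the message
    }

    public void scaleUp(int consumers) {
        SimpleMessageListenerContainer container =
                (SimpleMessageListenerContainer) registry.getListenerContainer("myListener");
        container.setConcurrentConsumers(consumers); // takes effect at runtime
    }

    public void pause() {
        // effectively concurrency 0: stop consuming while downstream is down
        registry.getListenerContainer("myListener").stop();
    }

    public void resume() {
        registry.getListenerContainer("myListener").start();
    }
}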
I have set up a Spring XD stream with a DLQ and configured it cyclically so that messages return from the DLQ to the main queue:
xdbus.s3Test.0-->DLQ-->xdbus.s3Test.0
stream create s3Test --definition "aws-s3-source | log"
stream deploy s3Test --properties module.*.consumer.autoBindDLQ=true
To close the circle, I had to change the DLQ's configuration manually from the RabbitMQ admin UI, adding:
x-message-ttl: 30000
x-dead-letter-exchange: Default(Empty)
Is there any way in Spring XD to configure the DLQ properties? The DL queue is generated by XD at runtime, and ideally I cannot tamper with it in production. Can I set some properties so XD applies the above settings to the DLQ?
You can't set the DLQ properties through XD, but you can create a policy in RabbitMQ that applies to DLQs created by XD:
$ rabbitmqctl set_policy XDDLQTTL "xdbus\..*\.dlq" '{"dead-letter-exchange":"", "message-ttl":30000}' --apply-to queues
This applies your required properties to all queues starting with xdbus. and ending with .dlq.
I just tested it with this and it works nicely...
xd:>stream create ticktock --definition "time --fixedDelay=60 | transform --expression=1/0 | log"
Created new stream 'ticktock'
xd:>stream deploy ticktock --properties module.*.consumer.autoBindDLQ=true
Deployed stream 'ticktock'
And you can see the message cycling (because of the divide by zero).
One caveat: this will keep retrying forever; you would need some code in the module to look at the x-death header if you wish to abort after some number of retries.
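A sketch of such a guard, assuming the x-death header layout of recent RabbitMQ versions (a list of maps carrying a count entry):

import java.util.List;
import java.util.Map;

import org.springframework.messaging.Message;

// Give up after MAX_RETRIES by reading the x-death header that RabbitMQ
// stamps on each dead-lettered redelivery.
public class RetryGuard {

    private static final int MAX_RETRIES = 3;

    @SuppressWarnings("unchecked")
    public boolean shouldAbort(Message<?> message) {
        List<Map<String, Object>> xDeath =
                (List<Map<String, Object>>) message.getHeaders().get("x-death");
        if (xDeath == null || xDeath.isEmpty()) {
            return false; // first delivery; not dead-lettered yet
        }
        Long count = (Long) xDeath.get(0).get("count");
        return count != null && count >= MAX_RETRIES;
    }
}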