I have set up a Spring XD stream with a DLQ and configured it cyclically so that messages return from the DLQ to the main queue:
xdbus.s3Test.0-->DLQ-->xdbus.s3Test.0
stream create s3Test --definition "aws-s3-source | log"
stream deploy s3Test --properties module.*.consumer.autoBindDLQ=true
To close the cycle I had to change the DLQ configuration manually from the RabbitMQ admin UI, adding:
x-message-ttl: 30000
x-dead-letter-exchange: Default(Empty)
Is there any way in Spring XD to configure the DLQ properties? The DL queue is generated by XD at runtime, and ideally I cannot tamper with it in production. Can I set some properties on the stream that apply the settings above to the DLQ?
You can't set the DLQ properties through XD, but you can create a policy in RabbitMQ that applies to DLQs created by XD:
$ rabbitmqctl set_policy XDDLQTTL "xdbus\..*\.dlq" '{"dead-letter-exchange":"", "message-ttl":30000}' --apply-to queues
This applies your required properties to all queues starting with xdbus. and ending with .dlq.
I just tested it with this and it works nicely...
xd:>stream create ticktock --definition "time --fixedDelay=60 | transform --expression=1/0 | log"
Created new stream 'ticktock'
xd:>stream deploy ticktock --properties module.*.consumer.autoBindDLQ=true
Deployed stream 'ticktock'
And you can see the message cycling (because of the divide by zero).
One caveat - this will keep retrying forever; you would need some code in the module to look at the x-death header if you wish to abort after some number of retries.
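The retry cap mentioned in that caveat can be sketched in plain Java. The header layout below (x-death as a list of maps, each carrying a count) follows what RabbitMQ stamps on dead-lettered messages; the class name and the limit of 3 are made up for illustration:

```java
import java.util.List;
import java.util.Map;

public class XDeathCheck {

    static final int MAX_RETRIES = 3; // illustrative limit

    // Returns true when the message has cycled through the DLQ
    // at least MAX_RETRIES times and should be discarded or parked.
    @SuppressWarnings("unchecked")
    static boolean shouldAbort(Map<String, Object> headers) {
        Object xDeath = headers.get("x-death");
        if (!(xDeath instanceof List)) {
            return false; // first delivery, no dead-letter history yet
        }
        long count = 0;
        for (Object entryObj : (List<Object>) xDeath) {
            Map<String, Object> entry = (Map<String, Object>) entryObj;
            Object c = entry.get("count");
            if (c instanceof Number) {
                count += ((Number) c).longValue();
            }
        }
        return count >= MAX_RETRIES;
    }

    public static void main(String[] args) {
        Map<String, Object> headers = Map.of(
                "x-death", List.of(Map.of("queue", "xdbus.ticktock.0.dlq", "count", 5L)));
        System.out.println(shouldAbort(headers)); // prints "true"
        System.out.println(shouldAbort(Map.of())); // prints "false"
    }
}
```

In a custom module you would call a check like this at the top of the message handler and route exhausted messages to a parking-lot queue instead of rethrowing.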
I am currently working on an integration application that uses Camel with Spring Boot. A Camel route in the application receives messages from a source Artemis broker, transforms them, and sends them to another Artemis broker.
The camel route looks like this:
from(sourceQueue).process(transformProcessor).to(destinationQueue)
When the Camel route starts, it recreates the queues named in the from and to endpoints, and the previous messages are lost. We do not expect this to happen.
One way I found to avoid this is to disable queue and topic auto-creation in the Artemis broker.xml and create the queue(s) using the Artemis API.
My question is: can we configure the Camel JMS/AMQP component to create the queue only if it is not present, and otherwise use the existing one?
By default Camel will use DynamicDestinationResolver. You can create your own custom DestinationResolver and plug it into your endpoint (or into your component):
.to("jms:queue:myQueue?destinationResolver=MyCustomDestinationResolver");
You can also use JndiDestinationResolver, which by default does not fall back to creating a dynamic destination.
I don't know Artemis, but it sounds weird for a broker to delete a queue along with its messages. At least its "brother" ActiveMQ has the behavior you expect by default: queues are automatically created if they do not exist, but are left alone if they already exist.
Are you sure the queues are recreated on route start? Are these queues persistent? Could it be that a consumer just drains the queue? I also found a queue attribute of Artemis named auto-delete-queues that would delete the queue if it was drained by a consumer.
auto-delete-queues: whether or not the broker should automatically delete auto-created JMS queues when they have both 0 consumers and 0 messages.
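If that attribute is the culprit, it can be switched off in broker.xml. A minimal sketch, assuming a recent Artemis address-settings layout (the match pattern # covering all addresses is an assumption for illustration):

```xml
<address-settings>
    <!-- keep auto-created queues around even when empty and unconsumed -->
    <address-setting match="#">
        <auto-delete-queues>false</auto-delete-queues>
    </address-setting>
</address-settings>
```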
In my POC, I am using Spring Cloud Config and Spring Cloud Stream Rabbit. I want to dynamically change the number of listeners (concurrency). Is it possible to do that? I want the following:
1) If there are too many messages in the queue, I want to increase the concurrency level.
2) When my downstream system is not available, I want to stop processing messages from the queue (in short, concurrency level 0).
How can I achieve this?
Thanks for the help.
The listener container running in the binder supports such changes (although you can't go down to 0 concurrent consumers, the container can be stop()ped).
However, spring-cloud-stream provides no mechanism for you to get a reference to the listener container.
You might want to consider using a @RabbitListener from Spring AMQP instead - it will give you complete control over the listener container.
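A minimal sketch of that approach (not runnable on its own - it assumes Spring AMQP is on the classpath, and the listener id and queue name are made up for illustration). The RabbitListenerEndpointRegistry lets you stop the container or adjust its concurrency at runtime:

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    // "orders" and "orders.queue" are hypothetical names
    @RabbitListener(id = "orders", queues = "orders.queue")
    public void handle(String payload) {
        // ... process the message ...
    }

    public void pause() {
        // downstream unavailable: stop consuming entirely
        registry.getListenerContainer("orders").stop();
    }

    public void scaleUp(int consumers) {
        // queue backing up: raise concurrency on the fly
        ((SimpleMessageListenerContainer) registry.getListenerContainer("orders"))
                .setConcurrentConsumers(consumers);
    }
}
```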
I am migrating from Spring XD to Spring Cloud Data Flow. When looking at the module list I realised that some of the sources are not listed in Spring Cloud Data Flow - one of them is the Kafka source.
My question is: why was the Kafka source removed from the standard sources list in Spring Cloud Data Flow?
When looking at the module list I realised that some of the sources are not listed in Spring Cloud Data Flow
The majority of the applications have been ported over and the remaining ones are incrementally prioritized - you can keep track of the remaining subset in the backlog.
My question is: why was the Kafka source removed from the standard sources list in Spring Cloud Data Flow?
Kafka is not removed; in fact, we are so highly opinionated about Kafka in the context of streaming use-cases that it is baked into the DSL directly. More details here.
For instance,
(i) if you've to consume from a Kafka topic (as a source), your stream definition would be:
stream create --definition ":someAwesomeTopic > log" --name subscribe_to_broker --deploy
(ii) if you've to write to a Kafka topic (as a sink), your stream definition would be:
stream create --definition "http --server.port=9001 > :someAwesomeTopic" --name publish_to_broker --deploy
(where *someAwesomeTopic* is the named destination, a topic name)
I have a Spring XD rabbit source in my stream definition, but it fails when the queue it listens to has not yet been created. When using Spring Integration with Boot I am able to do this in my Java config.
My stream definition:
stream create --name HOLA_Q --definition "rabbit --requeue=false | my-own-processor | null" --deploy
I have tried declaring a rabbit admin in the spring-module.xml inside my-own-processor, but it doesn't work and is not triggered during stream deployment.
Or is this rabbit queue auto creation feature not yet supported?
Many Thanks
Auto creation of the queue by the source is not currently supported.
Per the documentation:
The queue(s) must exist before the stream is deployed. We do not create the queue(s) automatically. However, you can easily create a Queue using the RabbitMQ web UI. Then, using that same UI, you can navigate to the "rabbittest" Queue and publish test messages to it.
You could create a custom rabbit source that adds the queue (and an optional exchange and binding), as well as a RabbitAdmin bean to the application context and the queue/exchange/binding will be declared.
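A minimal sketch of such declarations, using the Spring Rabbit XML namespace (the queue, exchange, and connection-factory names are assumptions for illustration). With a RabbitAdmin in the context, the queue/exchange/binding are declared on the broker when the module starts:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:rabbit="http://www.springframework.org/schema/rabbit"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/rabbit
           http://www.springframework.org/schema/rabbit/spring-rabbit.xsd">

    <!-- declares the queue/exchange/binding below at startup -->
    <rabbit:admin connection-factory="rabbitConnectionFactory"/>

    <rabbit:queue name="rabbittest"/>

    <rabbit:direct-exchange name="my.exchange">
        <rabbit:bindings>
            <rabbit:binding queue="rabbittest" key="rabbittest"/>
        </rabbit:bindings>
    </rabbit:direct-exchange>
</beans>
```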
Does Spring XD support automatic failover when running in distributed mode? Right now, if I create a simple stream on a setup with two containers, the source gets deployed to one container and the sink gets deployed to the other. If I shut down the second container, the stream is still listed as deployed. If I undeploy and redeploy the stream, things start working as expected, with both source and sink deployed to the one remaining container. I would expect this to happen automatically. I am using version 1.0.0.M3 with the following example stream:
stream create --definition "time | log" --name ticktock
It does not support automatic failover. Better management of streams, e.g. deployment manifests/monitoring, is starting to be developed now - stay tuned.
Cheers,
Mark