Using AggregateApplicationBuilder with a local binder - spring

I'm trying to aggregate different Sink and Source Spring Boot applications using the AggregateApplicationBuilder, as described here: http://docs.spring.io/spring-cloud-stream/docs/current-SNAPSHOT/reference/htmlsingle/#_aggregation
Since I expect in-process communication, I don't want to set up a Kafka or RabbitMQ binder. How do I configure a local one? I found that a spring-cloud-stream-binder-local exists, but it has been at M2 for a long time and is not included in a release train.
How can I use the AggregateApplicationBuilder with no external system dependency?
Thanks

With the AggregateApplicationBuilder you don't have to configure a binder for the in-process communication of the directly bound channels within the aggregated application. A binder is required only if the aggregate application itself needs to consume messages from, or produce messages to, a broker. If the aggregated application is self-contained, there is no need for a binder at all.
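As a minimal sketch of what such an aggregate might look like, assuming SourceApp, ProcessorApp and SinkApp are hypothetical @SpringBootApplication classes bound to Source, Processor and Sink (the class names and --fixedDelay argument are placeholders, not from the question):

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.aggregate.AggregateApplicationBuilder;

@SpringBootApplication
public class SampleAggregateApplication {

    public static void main(String[] args) {
        // Wires SourceApp -> ProcessorApp -> SinkApp through directly bound
        // in-process channels; no Kafka/RabbitMQ binder is involved.
        new AggregateApplicationBuilder()
                .from(SourceApp.class).args("--fixedDelay=5000")
                .via(ProcessorApp.class)
                .to(SinkApp.class)
                .run(args);
    }
}
```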

Related

Kafka as event source when using Axon

I'm studying the Axon framework to try to use it in one of my microservices. I use Spring Boot for my microservice and I want to use the Axon framework for DDD and event sourcing. The thing is, we already use Kafka in production and I'm not sure I can add another service (Axon Server), since it might consume resources I don't have (does it consume a lot of resources, by the way?).
So I was thinking of using Kafka for event sourcing and event routing with Axon.
Is that possible?
You can use Kafka as the event bus via the Kafka extension for Axon. You can't use Kafka as the event store, however, so to use Axon Framework you still need Axon Server or a relational database for the event store.
You could also combine the two, e.g. route some events via Kafka and some via Axon Server.
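As an illustration only, here is a minimal sketch of pointing the event store at a relational database instead of Axon Server, assuming Axon 4 with the Spring Boot starter and JPA on the classpath (the Kafka extension would then handle event distribution separately):

```java
import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.common.transaction.TransactionManager;
import org.axonframework.eventsourcing.eventstore.EventStorageEngine;
import org.axonframework.eventsourcing.eventstore.jpa.JpaEventStorageEngine;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AxonEventStoreConfig {

    // Persist events in the relational database via JPA; Axon's Spring Boot
    // auto-configuration wraps this engine in an event store for the framework.
    @Bean
    public EventStorageEngine eventStorageEngine(EntityManagerProvider entityManagerProvider,
                                                 TransactionManager transactionManager) {
        return JpaEventStorageEngine.builder()
                .entityManagerProvider(entityManagerProvider)
                .transactionManager(transactionManager)
                .build();
    }
}
```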

Is it possible to get exactly once processing with Spring Cloud Stream?

Currently I'm using SCS with an almost default configuration for sending and receiving messages between microservices.
I've read this:
https://www.confluent.io/blog/enabling-exactly-kafka-streams
and wonder whether it would work if we just set the property called "processing.guarantee" to the value "exactly-once" through the properties of the Spring Boot application.
In the context of your question, you should look at Spring Cloud Stream as just a delegate between the target system (e.g., Kafka) and your code. The binders that enable such delegation are usually implemented in such a way that they propagate whatever functionality is supported by the target system.
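As a rough sketch only: assuming the Kafka Streams binder, Kafka Streams settings such as processing.guarantee can be passed through the binder's configuration map, for example when booting the application (the property path and the application class are assumptions, and note the actual Kafka value uses an underscore):

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;

@SpringBootApplication
public class ExactlyOnceStreamsApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(ExactlyOnceStreamsApplication.class)
                // Forwarded by the Kafka Streams binder to the underlying
                // Kafka Streams configuration.
                .properties("spring.cloud.stream.kafka.streams.binder.configuration.processing.guarantee=exactly_once")
                .run(args);
    }
}
```

The same property could equally go into application.properties or application.yml.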

spring cloud stream - Can a MicroService be a source and a sink at the same time

I have just one simple question: is it possible for a microservice (Spring Boot application) to be a Source and a Sink at the same time? That is, can one class in the microservice be annotated with
@InboundChannelAdapter
and another class of the microservice be annotated with
@StreamListener
Yes, it is possible. You just have to configure the bindings for them properly.
You can even use different binders: https://docs.spring.io/spring-cloud-stream/docs/Chelsea.SR2/reference/htmlsingle/index.html#multiple-systems
Think of the source and sink capabilities of your microservice as just ports of the application: you may have several of them, and each may perform independent logic. A minimal example is sketched below.
The only problem is that you can't use such a custom Spring Cloud Stream application in Spring Cloud Data Flow, which requires you to follow a particular naming and structure convention.
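A minimal sketch of such an application, assuming the annotation-based programming model (Spring Cloud Stream 1.x/2.x) with the classic Source and Sink binding interfaces; the class name and payload are illustrative:

```java
import java.time.Instant;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.support.MessageBuilder;

@EnableBinding({Source.class, Sink.class})
@SpringBootApplication
public class SourceAndSinkApplication {

    public static void main(String[] args) {
        SpringApplication.run(SourceAndSinkApplication.class, args);
    }

    // Source side: poll every second and emit a message on the "output" binding.
    @Bean
    @InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "1000"))
    public MessageSource<String> timerMessageSource() {
        return () -> MessageBuilder.withPayload("tick " + Instant.now()).build();
    }

    // Sink side: consume messages arriving on the "input" binding.
    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        System.out.println("Received: " + payload);
    }
}
```

Each side binds to its own destination, so the two "ports" remain independent.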

Can we change RabbitMQ properties with Spring Cloud Config and Spring Cloud Stream Rabbit

In my POC, I am using Spring Cloud Config and Spring Cloud Stream Rabbit. I want to dynamically change the number of listeners (concurrency). Is it possible to do that? I want to do the following:
1) If there are too many messages in the queue, I want to increase the concurrency level.
2) When my downstream system is not available, I want to stop processing messages from the queue (in short, concurrency level 0).
How can I achieve this?
Thanks for the help.
The listener container running in the binder supports such changes (although you can't go down to 0, the container can be stop()ped).
However, spring-cloud-stream provides no mechanism for you to get a reference to the listener container.
You might want to consider using a @RabbitListener from Spring AMQP instead - it will give you complete control over the listener container.
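For illustration, a minimal sketch of what that control looks like with Spring AMQP, assuming a hypothetical "orders" queue and listener id (not from the question):

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    @RabbitListener(id = "orderListener", queues = "orders")
    public void onMessage(String payload) {
        // process the message
    }

    // Raise or lower the number of concurrent consumers at runtime.
    public void setConcurrency(int consumers) {
        SimpleMessageListenerContainer container =
                (SimpleMessageListenerContainer) registry.getListenerContainer("orderListener");
        container.setConcurrentConsumers(consumers);
    }

    // The "concurrency 0" case: stop consuming while the downstream system is unavailable.
    public void pause() {
        registry.getListenerContainer("orderListener").stop();
    }

    public void resume() {
        registry.getListenerContainer("orderListener").start();
    }
}
```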

Is it possible for Spring-XD to listen to more than one JMS broker at a time?

I've managed to get Spring XD working for a scenario where I have data coming in from one JMS broker.
I am potentially facing a scenario where data ingestion could happen from different sources, which would require me to connect to different brokers.
Based on my current understanding, I'm not quite sure how to do this, as there is a JMS config file which allows you to set up only one broker.
Is there a workaround to this?
At the moment, you would have to create a separate jms-[provider]-infrastructure-context.xml for each broker (in modules/common), say, calling the provider activemq2.
Then use --provider=activemq2 in the module definition.
(I recently used this technique to test sonicmq and hornetq providers).

Resources