I wonder if there is a way to reduce the amount of boilerplate code when initializing many RabbitMQ queues/bindings in Spring Boot.
Following an event-driven approach, my app produces around 50 types of events (it will be split into several smaller apps later, but still).
Each event goes to an exchange of type "topic".
Some events are consumed by other apps; some are additionally consumed by the same app that sends them.
Let's consider that publishing-and-self-consuming case.
In Spring Boot, for each event I need to declare:
- the routing key name in config (like "event.item.purchased")
- the queue name for consuming that event inside the same app ("queue.event.item.purchased")
- a matching configuration-properties class field, or a variable itemPurchasedRoutingKey, or a constant in code which keeps the property name (like ${event.item.purchased})
- a bean for Queue creation (with a name featuring the event name), like itemPurchasedQueue
- a bean for Binding creation (with a name featuring the event name) and the routing key name, like itemPurchasedBinding, which is constructed with itemPurchasedQueue.bind(...itemPurchasedRoutingKey)
- a RabbitListener for the event, with the annotation containing the queue name (it can't be defined at runtime)
So: 6 places where "item purchased" is mentioned in one form or another.
The amount of boilerplate code is just killing me :)
With 50 events it's very easy to make a mistake: when adding a new event, you need to remember to add it in 6 places.
Ideally, for each event I'd like to:
- specify the routing key in config; the queue name can be built from it by appending a common prefix (specific to the app)
- use some annotation, or an alternative RabbitListener, which automatically declares the queue (by routing key + prefix), binds to it, and listens for events
Is there a way to optimize this?
I thought about custom annotations, but RabbitListener doesn't like dynamic queue names, and Spring Boot can't find beans for queues and bindings if I declare them inside some util method.
Maybe there is a way to declare all that stuff in code, but that's not the Spring way, I believe :)
So I ended up with manual bean declaration, using one bind() call per bean:
@Configuration
@EnableConfigurationProperties(RabbitProperties::class)
class RabbitConfiguration(
    private val properties: RabbitProperties,
    private val connectionFactory: ConnectionFactory
) {
    @Bean
    fun admin() = RabbitAdmin(connectionFactory)

    @Bean
    fun exchange() = TopicExchange(properties.template.exchange)

    @Bean
    fun rabbitMessageConverter() = Jackson2JsonMessageConverter(
        jacksonObjectMapper()
            .registerModule(JavaTimeModule())
            .registerModule(Jdk8Module())
            .disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES)
            .enable(DeserializationFeature.READ_UNKNOWN_ENUM_VALUES_AS_NULL)
    )

    @Value("\${okko.rabbit.queue-prefix}")
    lateinit var queuePrefix: String

    fun <T> bind(routingKey: String, listener: (T) -> Mono<Void>): SimpleMessageListenerContainer {
        val queueName = "$queuePrefix.$routingKey"
        val queue = Queue(queueName)
        admin().declareQueue(queue)
        admin().declareBinding(BindingBuilder.bind(queue).to(exchange()).with(routingKey)!!)
        val container = SimpleMessageListenerContainer(connectionFactory)
        container.addQueueNames(queueName)
        container.setMessageListener(MessageListenerAdapter(MessageHandler(listener), rabbitMessageConverter()))
        return container
    }

    internal class MessageHandler<T>(private val listener: (T) -> Mono<Void>) {
        // NOTE: don't change the name of this method, rabbit needs it
        fun handleMessage(message: T) {
            listener.invoke(message).subscribeOn(Schedulers.elastic()).subscribe()
        }
    }
}
@Service
@Configuration
class EventConsumerRabbit(
    private val config: RabbitConfiguration,
    private val routingKeys: RabbitEventRoutingKeyConfig
) {
    @Bean
    fun event1() = handle(routingKeys.event1)

    @Bean
    fun event2() = handle(routingKeys.event2)

    ...

    // returns the listener container built by bind(), so each event becomes a container bean
    private fun <T> handle(routingKey: String): SimpleMessageListenerContainer = config.bind<T>(routingKey) {
        log.debug("consume rabbit event: $it")
        ... // handle event, return Mono<Void>
    }

    companion object {
        private val log by logger()
    }
}
@Configuration
@ConfigurationProperties("my.rabbit.routing-key.event")
class RabbitEventRoutingKeyConfig {
    lateinit var event1: String
    lateinit var event2: String
    ...
}
I need to publish multiple messages from the same project, representing employee-journey events, and I need to use only one topic to publish these messages, since they all belong to the same project. In some cases a message may contain extra fields, for example:
All messages share (id, name, type, date), and some events have more fields like (course id, course name). So I intend to use one parent object called "Journey" containing an "Event" object, and to create multiple child objects like 'LMSEvent' that extend this Event, etc., if needed. I'm also using Jackson + Spring Boot over REST APIs to do the needed cast based on the type attribute. Finally, this message goes to Kafka directly, so each object contains its own properties.
On the consumer side, I will use some strategy patterns and apply the required logic per type, if needed.
The message size will not be very big, and I don't expect to have many different attributes per event.
I would like to know whether this approach is good, and if it is not, what the alternative is.
I think that in general it is a good approach. Having a single message schema per topic versus multiple schemas is always a good question, and both options have bright sides and drawbacks; you can read more about it in Martin Kleppmann's article.
Once you decide to have multiple events on a single topic, then from the REST API through the Kafka producer and consumer you can use the same approach for serializing and deserializing events; @JsonTypeInfo and @JsonSubTypes do the job:
@JsonTypeInfo(
        use = JsonTypeInfo.Id.NAME,
        include = JsonTypeInfo.As.EXISTING_PROPERTY,
        property = "type")
@JsonSubTypes({
        @JsonSubTypes.Type(value = LMSEvent.class, name = "LMSEvent"),
        @JsonSubTypes.Type(value = YetAnotherEvent.class, name = "YetAnotherEvent")
})
public interface Event {
    String getType();

    default boolean hasType(String type) {
        return getType().equalsIgnoreCase(type);
    }

    default <T> T getConcreteEvent(Class<T> clazz) {
        return clazz.cast(this);
    }
}
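To illustrate the mapping, a minimal round-trip sketch (LMSEvent is one of the assumed subtypes here; its getType() must return "LMSEvent" to satisfy the EXISTING_PROPERTY mapping, and the JSON fields are made up):

import com.fasterxml.jackson.databind.ObjectMapper;

ObjectMapper mapper = new ObjectMapper();
String json = "{\"type\":\"LMSEvent\",\"id\":\"1\",\"courseId\":\"c-42\"}";
Event event = mapper.readValue(json, Event.class); // deserialized as LMSEvent
LMSEvent lms = event.getConcreteEvent(LMSEvent.class);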
When you consume these messages using spring-kafka, you can write some very neat code where each method consumes a concrete event type, so you don't need to write any dirty casting yourself:
@KafkaListener(topics = "someEvents", containerFactory = "myKafkaContainerFactory")
public class MyKafkaHandler {

    @KafkaHandler
    void handleLMSEvent(LMSEvent event) {
        ....
    }

    @KafkaHandler
    void handleYetAnotherEvent(YetAnotherEvent yetAnotherEvent) {
        ...
    }

    @KafkaHandler(isDefault = true)
    void handleDefault(@Payload Object unknown,
                       @Header(KafkaHeaders.OFFSET) long offset,
                       @Header(KafkaHeaders.RECEIVED_PARTITION) int partitionId,
                       @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        logger.info("Server received unknown message {},{},{}", offset, partitionId, topic);
    }
}
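The answer references a myKafkaContainerFactory bean but doesn't show it; a hedged sketch of one possible definition (an assumption, using spring-kafka's JsonMessageConverter so the payload is converted to the concrete Event subtype before @KafkaHandler dispatches on parameter type):

import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.support.converter.JsonMessageConverter;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> myKafkaContainerFactory(
        ConsumerFactory<String, Object> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Converts the JSON payload to the concrete Event subtype (via type headers
    // or the @JsonTypeInfo mapping) so @KafkaHandler can pick the right method.
    factory.setMessageConverter(new JsonMessageConverter());
    return factory;
}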
I'm able to make Spring + RabbitMQ work the non-functional way (prior to 2.0?), but I'm trying to use the functional pattern, as the previous one is deprecated.
I've been following this doc: https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_binding_and_binding_names
With the new method, the queue (consumer) is not being created in RabbitMQ. I can see the connection being created, but without any consumer.
I have the following in my application.properties:
spring.cloud.stream.function.bindings.approved-in-0=approved
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
which is replacing:
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
And the new class:
@Slf4j
@Service
public class ApprovedReceiver {

    @Bean
    public Consumer<String> approved() {
        // I also saw that it's recommended to not use Consumer, but use Function instead
        // https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_consumer_reactive
        return value -> log.info("value: {}", value);
    }
}
which is replacing:
// BindableApprovedChannel.class
@Configuration
public interface BindableApprovedChannel {
    @Input("approved")
    SubscribableChannel getApproved();
}

// ApprovedReceiver.class
@Service
@EnableBinding(BindableApprovedChannel.class)
public class ApprovedReceiver {

    @StreamListener("approved")
    public void handleMessage(String payload) {
        log.info("value: {}", payload);
    }
}
Thanks!
If you have multiple beans of type Function, Supplier or Consumer (which could be declared by third-party libraries), the framework does not know which one to bind to.
Try setting the spring.cloud.function.definition property to approved.
https://docs.spring.io/spring-cloud-stream/docs/3.1.3/reference/html/spring-cloud-stream.html#spring_cloud_function
In the event you have only a single bean of type java.util.function.[Supplier/Function/Consumer], you can skip the spring.cloud.function.definition property, since such a functional bean will be auto-discovered. However, it is considered best practice to use the property anyway, to avoid any confusion. Sometimes this auto-discovery can get in the way, since a single bean of type java.util.function.[Supplier/Function/Consumer] could be there for purposes other than handling messages, yet, being the only one, it is auto-discovered and auto-bound. For these rare scenarios you can disable auto-discovery by setting the spring.cloud.stream.function.autodetect property to false.
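For reference, the two properties mentioned above as they would appear in application.properties:

# Bind the approved function explicitly (recommended even with a single bean)
spring.cloud.function.definition=approved

# Or, if a lone functional bean exists for another purpose, turn off auto-discovery
spring.cloud.stream.function.autodetect=false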
Gary's answer is correct. If adding the definition property alone doesn't resolve the issue, I would recommend sharing what you're doing for your supplier.
This is also a very helpful general discussion for transitioning from imperative to functional, with links to repos containing more in-depth examples: EnableBinding is deprecated in Spring Cloud Stream 3.x
I'm using spring-integration-kafka.
I have an abstract interface Event, which has dozens of concrete implementations, say AEvent, BEvent, CEvent, etc. I want one single consumer listener to handle all incoming Events, such as fun handle(message: Message<Event>) { message.payload... }
After reading the documentation, I found no way to get auto-deserialization without an explicit type provided on the consumer side.
Any suggestion will be appreciated.
Using JdkSerializationRedisSerializer from spring-data-redis meets my requirements.
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.data.redis.serializer.JdkSerializationRedisSerializer;

public class GenericObjectSerializer implements Serializer<Object> {
    private final JdkSerializationRedisSerializer serializer = new JdkSerializationRedisSerializer();
    @Override // delegate Kafka serialization to JDK serialization from spring-data-redis
    public byte[] serialize(String topic, Object data) { return serializer.serialize(data); }
}

public class GenericObjectDeserializer implements Deserializer<Object> {
    private final JdkSerializationRedisSerializer serializer = new JdkSerializationRedisSerializer();
    @Override
    public Object deserialize(String topic, byte[] data) { return serializer.deserialize(data); }
}
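These can then be wired into the client configuration; a sketch assuming Spring Boot's Kafka auto-configuration (the com.example package is an assumption):

spring.kafka.producer.value-serializer=com.example.GenericObjectSerializer
spring.kafka.consumer.value-deserializer=com.example.GenericObjectDeserializer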
I know this question is very big, but I just want to make the situation I am in clear.
I am working on an application that consumes JMS messages from a message broker.
We are using a Camel route on the consumer side. All the objects required in the route builder are injected through constructor injection using Spring.
I want to mock the behavior of the actual processing once the consumer receives the message from the queue. All the classes get loaded via the Spring configuration.
Below are the three classes:
CustomRouteBuilder.java
public class CustomRouteBuilder extends RouteBuilder {
    private CustomRouteAdapter customAdapter;

    public CustomRouteBuilder(CustomRouteAdapter customAdapter) {
        this.customAdapter = customAdapter;
    }

    public void configure(RouteDefinition route) {
        route.bean(customAdapter);
    }
}
CustomRouteAdapter.java
public class CustomRouteAdapter {
    private Orchestrator orchestrator;

    public CustomRouteAdapter(Orchestrator orchestrator) {
        this.orchestrator = orchestrator;
    }

    @Handler
    public void process(String message) {
        orchestrator.generate(message);
    }
}
Orchestrator.java
public class Orchestrator {
    private Service service;

    public Orchestrator(Service service) {
        this.service = service;
    }

    public void generate(String message) {
        service.process(message);
    }
}
As per our requirements, we have to load this configuration and then write a functional test using Spock.
Below is my CustomRouteBuilderTest.groovy file:
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.context.annotation.Configuration
import org.springframework.context.annotation.Import
import org.springframework.context.annotation.PropertySource
import org.springframework.test.context.ContextConfiguration
import org.springframework.test.util.ReflectionTestUtils
import spock.lang.Specification

@ContextConfiguration(classes = [CustomRouteBuilderTest.Config.class])
class CustomRouteBuilderTest extends Specification {
    private static final String message = "Hello"

    Orchestrator orchestrator

    @Autowired
    CustomRouteAdapter customRouteAdapter

    def setup() {
        orchestrator = Mock(Orchestrator)
        ReflectionTestUtils.setField(customRouteAdapter, "orchestrator", orchestrator)
        orchestrator.generate(message)
    }

    private String getMessageAsJson() {
        //return json string
    }

    private String getMessage() {
        // return message
    }

    private Map<String, Object> doMakeHeaders() {
        //Create message headers
    }

    private void doSendMessage() {
        Thread.sleep(5000)
        Map<String, Object> messageHeader = doMakeHeaders()
        byte[] message = getMessageAsJson().getBytes()
        CamelContext context = new DefaultCamelContext()
        ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(jmsBrokerUrl)
        context.addComponent("activeMQComponent", JmsComponent.jmsComponent(connectionFactory))
        ProducerTemplate template = context.createProducerTemplate()
        context.start()
        template.sendBodyAndHeaders("queueName", message, messageHeader)
    }

    def "test message consumption"() {
        given:
        doSendMessage()
    }

    @Configuration
    @Import([FunctionalTestCommonConfig.class, CustomRouteBuilderConfig.class])
    @PropertySource(value = "classpath:test.properties")
    static class Config {
    }
}
The problem here is that even though I inject the mocked object into the adapter using ReflectionTestUtils, I am not able to define its behavior correctly.
When the message is received, the orchestrator tries to process it.
My requirement is that:
the adapter should be called from the Camel route automatically, which happens; but
when orchestrator.generate is called from the adapter, nothing should happen; it should simply return.
But nothing like that is going on.
Each time I send a message, the consumer (RouteBuilder) receives it and calls the handler function, which then calls the orchestrator.generate(message) function, and the orchestrator starts processing and throws an exception from the service level.
Can anyone please help me with this?
I suppose your beans have been proxied by Spring, and this proxy uses CGLIB (because you see CustomRouteBuilder$$EnhancerBySpringCGLIB$$ad2783ae).
If that's really the case, what you @Autowired in your test is not the real instance of your CustomRouteAdapter but a CGLIB proxy: Spring creates a new class extending the real class and overriding all of its methods; the new methods delegate to the real instance.
When you change the orchestrator field, you are in reality changing the orchestrator field of the proxy, which is not used by the real instance.
There are several ways to achieve what you want:
- add a setOrchestrator method to CustomRouteAdapter
- create the mock in your Spring configuration and let Spring inject the mock instead of a real Orchestrator instance (see the sketch below)
- inject the orchestrator into the real instance (ugly; I don't recommend it, as it doesn't help the testability of your code!):
customRouteAdapter.targetSource.target.orchestrator = _themock_
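A minimal sketch of the second option (using Mockito is an assumption here; any mocking library that yields a no-op instance works). Because Spring injects this mock into CustomRouteAdapter, orchestrator.generate(message) simply returns:

import org.mockito.Mockito;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TestMockConfig {

    // This bean replaces the real Orchestrator wherever it is injected;
    // by default a Mockito mock does nothing on void methods.
    @Bean
    public Orchestrator orchestrator() {
        return Mockito.mock(Orchestrator.class);
    }
}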
I'm currently working on a RabbitMQ AMQP implementation project and use spring-rabbit to programmatically set up all my queues, bindings and exchanges (spring-rabbit 1.3.4 and spring-framework 3.2.0).
The declarations in a Java configuration class or in XML-based configuration are, in my opinion, both quite static. I know how to set a more dynamic value (e.g. a name) for a queue, exchange or binding like this:
@Configuration
public class ServerConfiguration {
    private String queueName;
    ...

    @Bean
    public Queue buildQueue() {
        Queue queue = new Queue(this.queueName, false, false, true, getQueueArguments());
        buildRabbitAdmin().declareQueue(queue);
        return queue;
    }
    ...
}
But I was wondering if it is possible to create an undefined number of Queue instances and register them as beans, like a factory registering all its instances.
I'm not really familiar with the Spring @Bean annotation and its limitations, but I tried:
@Configuration
public class ServerConfiguration {
    private String queueName;
    ...

    @Bean
    @Scope("prototype")
    public Queue buildQueue() {
        Queue queue = new Queue(this.queueName, false, false, true, getQueueArguments());
        buildRabbitAdmin().declareQueue(queue);
        return queue;
    }
    ...
}
And to see whether multiple bean instances of Queue are registered, I call:
Map<String, Queue> queueBeans = ((ListableBeanFactory) applicationContext).getBeansOfType(Queue.class);
But this only returns one mapping: the name of the method := the last created instance.
Is it possible to dynamically add beans to the Spring application context at runtime?
You can add beans dynamically to the context:
context.getBeanFactory().registerSingleton("foo", new Queue("foo"));
but they won't be declared by the admin automatically; you will have to call admin.initialize() to force it to re-declare all the AMQP elements in the context.
You would not do either of these in @Bean definitions, just in normal runtime Java code.
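Putting the two steps together, a minimal sketch (the queue prefix and the "events" exchange name are illustrative assumptions):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.context.ConfigurableApplicationContext;

public void addQueueAtRuntime(ConfigurableApplicationContext context,
                              RabbitAdmin admin, String routingKey) {
    Queue queue = new Queue("runtime." + routingKey);
    Binding binding = BindingBuilder.bind(queue)
            .to(new TopicExchange("events"))
            .with(routingKey);
    // register both so the admin can find them when re-declaring
    context.getBeanFactory().registerSingleton(queue.getName(), queue);
    context.getBeanFactory().registerSingleton(queue.getName() + ".binding", binding);
    admin.initialize(); // re-declares all AMQP elements found in the context
}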