Spring boot stream bind queue with multiple routing keys

I need to bind a single queue with multiple routing keys.
I have this configuration in application.properties:
spring.cloud.stream.bindings.some-channel1.destination=exch
spring.cloud.stream.bindings.some-channel1.group=a-queue
spring.cloud.stream.rabbit.bindings.some-channel1.consumer.binding-routing-key=event.domain1
spring.cloud.stream.bindings.some-channel2.destination=exch
spring.cloud.stream.bindings.some-channel2.group=a-queue
spring.cloud.stream.rabbit.bindings.some-channel2.consumer.binding-routing-key=event.domain2
This creates the queue and the bindings properly in RabbitMQ, but after running the application I get:
org.springframework.cloud.stream.binder.BinderException: Exception thrown while starting consumer:
With all of the above configuration I am still stuck, because I need a single channel, but a queue bound with a list of routing keys.
Any ideas how to configure it?

You can't do it with stream properties, but you can always add extra bindings with normal Spring AMQP declarations...
@SpringBootApplication
@EnableBinding(Sink.class)
public class So50526298Application {

    public static void main(String[] args) {
        SpringApplication.run(So50526298Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(String in) {
        System.out.println(in);
    }

    // extra bindings...

    @Bean
    public TopicExchange exch() {
        return new TopicExchange("exch");
    }

    @Bean
    public Queue queue() {
        // queue name is destination.group
        return new Queue("exch.a-queue");
    }

    @Bean
    public Binding extraBinding1() {
        // "event.domain2" to match the routing key from the question
        return BindingBuilder.bind(queue()).to(exch()).with("event.domain2");
    }
}
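If you need more routing keys on the same queue, you can declare one additional Binding bean per key. A minimal sketch under the same assumptions (it reuses the queue() and exch() beans above; "event.domain3" is just an illustrative extra key):

    @Bean
    public Binding extraBinding2() {
        // one Binding bean per additional routing key on the same queue
        return BindingBuilder.bind(queue()).to(exch()).with("event.domain3");
    }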
There is also a third-party "advanced" Boot starter that lets you add declarations in a YAML file. I haven't tried it, but it looks interesting.

Related

spring-kafka: DefaultErrorHandler with DeadLetterPublishingRecoverer(BiFunction) not considered. No DL topic created

In my Spring Boot application using spring-kafka, I am trying to configure an error handler with two things:
Retry message consumption failures a certain number of times (FixedBackOff) before publishing to a dead letter topic
Create a dead letter topic with a name of my choice
Using
// Version highlights
id 'org.springframework.boot' version '2.7.2'
...
implementation 'org.springframework.kafka:spring-kafka' // 2.8.8
Here is the code I am using, based on what I read in the Spring docs and saw reiterated in several articles online:
@Bean
public DefaultErrorHandler byteArrayDefaultErrorHandler(KafkaTemplate<String, byte[]> template) {
    var recoverer =
            new DeadLetterPublishingRecoverer(
                    template,
                    (record, e) -> new TopicPartition("%s.deadLetter".formatted(record.topic()), 0));
    return new DefaultErrorHandler(recoverer, new FixedBackOff(3000, 3));
}
But the above bean is not considered/used. So, when consumption encounters a failure (currently simulated by throwing an exception),
the FixedBackOff is not honored; instead the default of 10 back-to-back attempts is used.
No DL topic is created.
Currently, the consumer config class has minimal stuff:
@Bean public ConsumerFactory<String, byte[]> byteArrayConsumerFactory() { ... }
@Bean public ConcurrentKafkaListenerContainerFactory<String, byte[]> byteArrayListenerContainerFactory() { .. }
@Bean public DefaultErrorHandler byteArrayDefaultErrorHandler(KafkaTemplate<String, byte[]> template) { ...code pasted above... }
And the listener is as follows:
@KafkaListener(
        topics = "${app.config.kafka.topic}",
        containerFactory = "byteArrayListenerContainerFactory"
)
public void consumeMessage(ConsumerRecord<String, byte[]> record) { ... }
I am at a loss figuring out whether I have missed something or added something that conflicts with the wiring. Help is highly appreciated.
The error handler bean will only be wired in by Boot if you are using Boot's auto-configured container factory.
Since you are creating your own container factory bean...
@Bean public ConcurrentKafkaListenerContainerFactory<String, byte[]> byteArrayListenerContainerFactory() { .. }
...you must add the error handler yourself; see setCommonErrorHandler().
The framework does not automatically provision the dead letter topic; add a @Bean NewTopic dlt() { ... }.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#configuring-topics
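For illustration, a minimal sketch of both fixes (untested, reusing the question's bean names; "my-topic.deadLetter" is a placeholder that must match whatever name your recoverer's BiFunction resolves):

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, byte[]> byteArrayListenerContainerFactory(
            ConsumerFactory<String, byte[]> byteArrayConsumerFactory,
            DefaultErrorHandler byteArrayDefaultErrorHandler) {
        var factory = new ConcurrentKafkaListenerContainerFactory<String, byte[]>();
        factory.setConsumerFactory(byteArrayConsumerFactory);
        // Boot only wires the error handler into its own auto-configured factory,
        // so a custom factory must set it explicitly
        factory.setCommonErrorHandler(byteArrayDefaultErrorHandler);
        return factory;
    }

    @Bean
    public NewTopic deadLetterTopic() {
        // placeholder name; align it with the topic the DeadLetterPublishingRecoverer targets
        return TopicBuilder.name("my-topic.deadLetter").partitions(1).replicas(1).build();
    }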

Spring Kafka Requirements for Supporting Multiple Consumers

As one would expect, it is common to want different consumers deserializing in different ways off topics in Kafka. There is a known problem with Spring Boot autoconfig: it seems that as soon as other factories are defined, Spring Kafka or the autoconfig complains about no longer being able to find a suitable consumer factory. Some have pointed out that one solution is to include a ConsumerFactory of type (Object, Object) in the config, but no one has shown the source code for this, or clarified whether it needs to be named in any particular way, or whether simply adding this factory removes the need to turn off autoconfig. All of that remains very unclear.
If you are not familiar with this issue please read https://github.com/spring-projects/spring-boot/issues/19221
There it was just stated: OK, define the ConsumerFactory and add it somewhere in your config. Can someone be a bit more precise about this, please?
Show exactly how to define the ConsumerFactory so that Spring Boot autoconfig will not complain.
Explain whether turning off autoconfig is or is not needed.
Explain whether the ConsumerFactory needs to be named in any special way or not.
The simplest solution is to stick with Boot's auto-configuration and override the deserializer on the @KafkaListener itself...
@SpringBootApplication
public class So63108344Application {

    public static void main(String[] args) {
        SpringApplication.run(So63108344Application.class, args);
    }

    @KafkaListener(id = "so63108344-1", topics = "so63108344-1")
    public void listen1(String in) {
        System.out.println(in);
    }

    @KafkaListener(id = "so63108344-2", topics = "so63108344-2", properties =
            ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG +
                    "=org.apache.kafka.common.serialization.ByteArrayDeserializer")
    public void listen2(byte[] in) {
        System.out.println(in);
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("so63108344-1").partitions(1).replicas(1).build();
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name("so63108344-2").partitions(1).replicas(1).build();
    }
}
For more advanced container customization (or if you don't want to pollute the @KafkaListener), you can use a ContainerCustomizer...
@Component
class Customizer {

    public Customizer(ConcurrentKafkaListenerContainerFactory<?, ?> factory) {
        factory.setContainerCustomizer(container -> {
            if (container.getGroupId().equals("so63108344-2")) {
                container.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
                container.getContainerProperties().getKafkaConsumerProperties()
                        .setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            }
        });
    }
}
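As for the question's explicit ask: Boot's auto-configured kafkaListenerContainerFactory resolves a ConsumerFactory<Object, Object>, and Boot backs off from creating its own consumer factory as soon as any ConsumerFactory bean exists. So, as a hedged sketch (my reading of the linked issue, not verified against every Boot version), declaring your factory with the generic (Object, Object) type should keep auto-configuration satisfied, with no special bean name required and no need to turn autoconfig off:

    @Bean
    public ConsumerFactory<Object, Object> kafkaConsumerFactory(KafkaProperties properties) {
        // the generic (Object, Object) type satisfies Boot's auto-configured
        // kafkaListenerContainerFactory; per-listener deserializer overrides
        // (as shown above) still work on top of this
        return new DefaultKafkaConsumerFactory<>(properties.buildConsumerProperties());
    }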

WebSocket messages are not delivered all the time

I have an application with WebSockets, using a spring-boot application as the backend and Stomp/SockJS on the client side. The spring-boot application consumes JMS queue messages and notifies the right user of the changes. What is the problem? Sometimes it works and sometimes it doesn't, with the same code and users.
The client-side code is a bit more difficult to copy here because it is integrated into a React/Redux application, but basically it is a subscription to two different channels, both defined in the Spring configuration. The sessions are created correctly according to the debug information, but only sometimes is the message processed and sent to the connected sessions.
This is the configuration class for Spring.
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfiguration implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry
                .addEndpoint("/stomp")
                .setAllowedOrigins("*")
                .withSockJS();
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry
                .setApplicationDestinationPrefixes("/app")
                .enableSimpleBroker("/xxxx/yyyy", "/ccccc");
    }

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        registration.interceptors(new ChannelInterceptor() {

            @Override
            public Message<?> preSend(Message<?> message, MessageChannel channel) {
                StompHeaderAccessor accessor =
                        MessageHeaderAccessor.getAccessor(message, StompHeaderAccessor.class);
                if (StompCommand.CONNECT.equals(accessor.getCommand())) {
                    Object raw = message
                            .getHeaders()
                            .get(SimpMessageHeaderAccessor.NATIVE_HEADERS);
                    if (raw instanceof Map) {
                        Object name = ((Map<?, ?>) raw).get("email");
                        if (name instanceof LinkedList) {
                            String user = ((LinkedList<?>) name).get(0).toString();
                            accessor.setUser(new User(user));
                        }
                    }
                }
                return message;
            }
        });
    }
}
This is the JMS listener that processes the queue message and sends it to the specific user.
@Component
public class UserEventListener {

    private final Logger logger = LoggerFactory.getLogger(getClass());
    private final SimpMessagingTemplate template;

    @Autowired
    public UserEventListener(SimpMessagingTemplate pTemplate) {
        this.template = pTemplate;
    }

    @JmsListener(destination = "user/events")
    public void onStatusChange(Map<String, Object> props) {
        if (props.containsKey("userEmail")) {
            logger.debug("Event for user received: {}", props.get("userEmail"));
            template.convertAndSendToUser((String) props.get("userEmail"), "/ccccc", props);
        }
    }
}
Edit 1:
After more debugging: when it doesn't work, the WebSocket "session" seems to be lost by the Spring configuration. I don't see any log entries about "Disconnected" messages or anything similar; moreover, if I debug the server remotely, the problem doesn't appear during the debugging session. Any idea? The Spring class where the session disappears is DefaultSimpUserRegistry.
After more research I found a question with the same problem, along with the solution. Basically the conclusion is this:
A channel interceptor is not the right place to authenticate the user; we need to replace it with a custom handshake handler.
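To illustrate that conclusion, a minimal sketch (my own, untested) of such a handshake handler, built on DefaultHandshakeHandler from spring-websocket. It reuses the same User principal class as the interceptor above; reading the email from a query parameter is an assumption you would replace with your real authentication scheme:

    public class EmailHandshakeHandler extends DefaultHandshakeHandler {

        @Override
        protected Principal determineUser(ServerHttpRequest request, WebSocketHandler wsHandler,
                Map<String, Object> attributes) {
            // "email" as a query parameter is an assumption; resolve the user however your app does
            String email = UriComponentsBuilder.fromUri(request.getURI())
                    .build().getQueryParams().getFirst("email");
            return new User(email);
        }
    }

It would then be registered on the endpoint instead of the interceptor:

    registry.addEndpoint("/stomp")
            .setHandshakeHandler(new EmailHandshakeHandler())
            .setAllowedOrigins("*")
            .withSockJS();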

Listening to context Refreshed in Spring cloud consul config

Spring Cloud Consul Config allows dynamic refresh of properties whenever a property is changed in Consul. Is there a way to listen for when that change happens?
@Component
public class ContextRefreshListener {

    @EventListener
    public void handleContextRefresh(ContextRefreshedEvent event) {
        System.out.println("refreshed");
    }

    @EventListener
    public void handleContextStart(ContextStartedEvent event) {
        System.out.println("started");
    }

    @EventListener
    public void handleContextRefresh(ApplicationContextEvent event) {
        System.out.println("context");
    }
}
I tried the above three events, but no luck. Is there any way to listen for the event when the refresh happens?
I was able to do it in the following way:
@EventListener
public void handleContextStart(EnvironmentChangeEvent event) {
    System.out.println("changed");
    // use this for getting the version from consul
}
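For a slightly fuller picture, a sketch of a complete listener component (assumption: you only need the changed keys, which EnvironmentChangeEvent exposes via getKeys()):

    @Component
    public class ConsulPropertyChangeListener {

        // fired by Spring Cloud after the Environment has been updated from Consul
        @EventListener
        public void onEnvironmentChange(EnvironmentChangeEvent event) {
            // getKeys() reports which property keys changed in this refresh
            System.out.println("changed keys: " + event.getKeys());
        }
    }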

Spring Zuul: Dynamically disable a route to a service

I'm trying to disable a Zuul route to a microservice registered with Eureka at runtime (I'm using Spring Boot).
This is an example:
localhost/hello
localhost/world
Those two are the registered microservices. I would like to disable the route to one of them at runtime without shutting it down.
Is there a way to do this?
Thank you,
Nano
As an alternative to using Cloud Config, a custom ZuulFilter can be used. Something like this (partial implementation to show the concept):
public class BlackListFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre";
    }

    ...

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        String uri = ctx.getRequest().getRequestURI();
        String appId = uri.split("/")[1];
        if (blackList.contains(appId)) {
            ctx.setSendZuulResponse(false);
            LOG.info("Request '{}' from {}:{} is blocked",
                    uri, ctx.getRequest().getRemoteHost(), ctx.getRequest().getRemotePort());
        }
        return null;
    }
}
where blackList contains a list of application IDs (Spring Boot application names), managed for example via some RESTful API.
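One possible shape for that management API, as a sketch of my own (it assumes the filter and the controller share a single thread-safe Set<String> exposed as a bean):

    @Bean
    public Set<String> blackList() {
        // thread-safe set, shared between the filter and the controller below
        return ConcurrentHashMap.newKeySet();
    }

    @RestController
    @RequestMapping("/admin/blacklist")
    public class BlackListController {

        private final Set<String> blackList;

        // the same Set instance the BlackListFilter consults
        public BlackListController(Set<String> blackList) {
            this.blackList = blackList;
        }

        @PutMapping("/{appId}")
        public void block(@PathVariable String appId) {
            blackList.add(appId);
        }

        @DeleteMapping("/{appId}")
        public void unblock(@PathVariable String appId) {
            blackList.remove(appId);
        }
    }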
After a lot of effort I came up with this solution. First, I used Netflix Archaius to watch a property file. Then I proceeded as follows:
public class ApplicationRouteLocator extends SimpleRouteLocator implements RefreshableRouteLocator {

    public ApplicationRouteLocator(String servletPath, ZuulProperties properties) {
        super(servletPath, properties);
    }

    @Override
    public void refresh() {
        doRefresh();
    }
}
This exposes the protected doRefresh() method by extending SimpleRouteLocator and calling it from the overridden refresh() method of the RefreshableRouteLocator interface.
Then I redefined the RouteLocator bean with my custom implementation:
@Configuration
@EnableConfigurationProperties({ ZuulProperties.class })
public class ZuulConfig {

    private static final Logger logger = LoggerFactory.getLogger(ZuulConfig.class);

    public static ApplicationRouteLocator simpleRouteLocator;

    @Autowired
    private ZuulProperties zuulProperties;

    @Autowired
    private ServerProperties server;

    @Bean
    @Primary
    public RouteLocator routeLocator() {
        logger.info("zuulProperties are: {}", zuulProperties);
        simpleRouteLocator = new ApplicationRouteLocator(this.server.getServletPrefix(),
                this.zuulProperties);
        ConfigurationManager.getConfigInstance().addConfigurationListener(configurationListener);
        return simpleRouteLocator;
    }

    private ConfigurationListener configurationListener =
            new ConfigurationListener() {

                @Override
                public void configurationChanged(ConfigurationEvent ce) {
                    // zuulProperties.getRoutes() ... do something
                    // zuulProperties.getIgnoredPatterns() ... do something
                    simpleRouteLocator.refresh();
                }
            };
}
Every time a property in the file was modified, an event was triggered and the ConfigurationListener could handle it (getPropertyName() and getPropertyValue() extract data from the event). Since I had also autowired the ZuulProperties, I had access to them. With the right rule I could detect whether the Zuul property zuul.ignoredPatterns was modified, and change its value in the ZuulProperties accordingly.
Here a context refresh should work (as long as you are not adding a new routing rule or removing an existing one). If you are adding or removing routing rules, you have to add a new bean for ZuulProperties and mark it with @RefreshScope and @Primary.
You can, for example, autowire the refreshEndpoint bean and call refreshEndpoint.refresh() in the listener.
Marking a custom RouteLocator as primary will cause problems, as Zuul already has a bean of the same type marked as primary.
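A sketch of that refreshEndpoint suggestion (my own, untested; RefreshEndpoint comes from spring-cloud-context):

    @Component
    public class RouteRefreshListener {

        private final RefreshEndpoint refreshEndpoint;

        public RouteRefreshListener(RefreshEndpoint refreshEndpoint) {
            this.refreshEndpoint = refreshEndpoint;
        }

        // call this from whatever listens for route configuration changes
        public void onRouteConfigChanged() {
            refreshEndpoint.refresh(); // rebinds @RefreshScope beans such as ZuulProperties
        }
    }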
