Setting authorizationExceptionRetryInterval for Spring Kafka

Does anyone know how to set the new authorizationExceptionRetryInterval property without creating the ConcurrentKafkaListenerContainerFactory manually?

I was going to say...
@Component
class ContainerFactoryCustomizer {

    ContainerFactoryCustomizer(AbstractKafkaListenerContainerFactory<?, ?, ?> factory) {
        factory.setContainerCustomizer(
                container -> container.getContainerProperties()
                        .setAuthorizationExceptionRetryInterval(Duration.ofSeconds(10L)));
    }

}
But that doesn't work, due to a bug (the container customizer is never applied).
Here is a work-around:
@SpringBootApplication
public class So60054097Application {

    public static void main(String[] args) {
        SpringApplication.run(So60054097Application.class, args);
    }

    @KafkaListener(id = "so60054097", topics = "so60054097", autoStartup = "false")
    public void listen(String in) {
        System.out.println(in);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so60054097").partitions(1).replicas(1).build();
    }

    @Bean
    public ApplicationRunner runner(KafkaListenerEndpointRegistry registry) {
        return args -> {
            MessageListenerContainer container = registry.getListenerContainer("so60054097");
            container.getContainerProperties()
                    .setAuthorizationExceptionRetryInterval(Duration.ofSeconds(10L));
            container.start();
        };
    }

}
(Set autoStartup to false; set the property, then start the container.)

Related

Spring Boot Kafka Configure DefaultErrorHandler?

I created a batch-consumer following the Spring Kafka docs:
@SpringBootApplication
public class ApplicationConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(ApplicationConsumer.class);

    private static final String TOPIC = "foo";

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(ApplicationConsumer.class, args);
    }

    @Bean
    public RecordMessageConverter converter() {
        return new JsonMessageConverter();
    }

    @Bean
    public BatchMessagingMessageConverter batchConverter() {
        return new BatchMessagingMessageConverter(converter());
    }

    @KafkaListener(topics = TOPIC)
    public void listen(List<Name> ps) {
        LOGGER.info("received name beans: {}", Arrays.toString(ps.toArray()));
    }

}
I was able to get the consumer running by defining the following additional environment variables, which Spring automatically picks up:
export SPRING_KAFKA_BOOTSTRAP_SERVERS=...
export SPRING_KAFKA_CONSUMER_GROUP_ID=...
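For reference, Boot's relaxed binding maps those environment variables onto the usual dotted properties, so the equivalent application.properties entries would be:
spring.kafka.bootstrap-servers=...
spring.kafka.consumer.group-id=...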
So the above code works. But now I want to customize the default error handler to use exponential backoff. From the reference docs I tried adding the following to the ApplicationConsumer class:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setCommonErrorHandler(new DefaultErrorHandler(new ExponentialBackOffWithMaxRetries(10)));
    factory.setConsumerFactory(consumerFactory());
    return factory;
}

@Bean
public ConsumerFactory<String, Object> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    return props;
}
But now I get errors saying that it can't find some of the configuration. It seems I'm stuck redefining in consumerConfigs() all of the properties that were previously configured automatically, everything from the bootstrap server URIs to the JSON deserialization config.
Is there a good way to update the first version of the code so that it overrides just the default error handler?
Just define the error handler as a @Bean and Boot will automatically wire it into its auto-configured container factory.
EDIT
This works as expected for me:
@SpringBootApplication
public class So70884203Application {

    public static void main(String[] args) {
        SpringApplication.run(So70884203Application.class, args);
    }

    @Bean
    DefaultErrorHandler eh() {
        return new DefaultErrorHandler((rec, ex) -> {
            System.out.println("Recovered: " + rec);
        }, new FixedBackOff(0L, 0L));
    }

    @KafkaListener(id = "so70884203", topics = "so70884203")
    void listen(String in) {
        System.out.println(in);
        throw new RuntimeException("test");
    }

    @Bean
    NewTopic topic() {
        return TopicBuilder.name("so70884203").partitions(1).replicas(1).build();
    }

}
Sending foo to the topic yields:
foo
Recovered: ConsumerRecord(topic = so70884203, partition = 0, leaderEpoch = 0, offset = 0, CreateTime = 1643316625291, serialized key size = -1, serialized value size = 3, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = foo)
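The question asked for exponential backoff rather than the FixedBackOff used in this demo; swapping in ExponentialBackOffWithMaxRetries is a small change to the eh() bean (a sketch; the intervals below are arbitrary choices, not recommendations):
@Bean
DefaultErrorHandler eh() {
    // 10 retries, starting at 1s and doubling up to a 10s cap between attempts
    ExponentialBackOffWithMaxRetries backOff = new ExponentialBackOffWithMaxRetries(10);
    backOff.setInitialInterval(1_000L);
    backOff.setMultiplier(2.0);
    backOff.setMaxInterval(10_000L);
    return new DefaultErrorHandler((rec, ex) -> System.out.println("Recovered: " + rec), backOff);
}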

How to configure Spring Integration adapters for a client that merely connects and a server that sends messages

I'm trying to implement the following scenario using Spring Integration:
I need a client to connect to a server via TCP/IP and wait to receive messages within 30 seconds.
I need a server to send 0 to n messages to the client that has connected.
I need a way to start and stop the channel transfer without losing messages.
I need to change the port the server listens on between a stop and a start.
This is my config so far:
@Configuration
public class TcpConfiguration {

    private static Logger LOG = LoggerFactory.getLogger(TcpConfiguration.class);

    @Value("${port}")
    private Integer port;

    @Value("${so-timeout}")
    private Integer soTimeout;

    @Value("${keep-alive}")
    private Boolean keepAlive;

    @Value("${send-timeout}")
    private Integer sendTimeout;

    @Bean
    public AbstractServerConnectionFactory getMyConnFactory() {
        LOG.debug("getMyConnFactory");
        TcpNetServerConnectionFactory factory = new TcpNetServerConnectionFactory(port);
        LOG.debug("getMyConnFactory port={}", port);
        factory.setSoTimeout(soTimeout);
        LOG.debug("getMyConnFactory soTimeout={}", soTimeout);
        factory.setSoKeepAlive(keepAlive); // was hard-coded to true while the property was logged
        LOG.debug("getMyConnFactory keepAlive={}", keepAlive);
        return factory;
    }

    @Bean
    public AbstractEndpoint getMyChannelAdapter() {
        LOG.debug("getMyChannelAdapter");
        TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
        adapter.setConnectionFactory(getMyConnFactory());
        adapter.setOutputChannel(myChannelIn());
        adapter.setSendTimeout(sendTimeout);
        LOG.debug("getMyChannelAdapter adapter={}", adapter.getClass().getName());
        return adapter;
    }

    @Bean
    public MessageChannel myChannelIn() {
        LOG.debug("myChannelIn");
        return new DirectChannel();
    }

    @Bean
    @Transformer(inputChannel = "myChannelIn", outputChannel = "myServiceChannel")
    public ObjectToStringTransformer myTransformer() {
        LOG.debug("myTransformer");
        return new ObjectToStringTransformer();
    }

    @ServiceActivator(inputChannel = "myServiceChannel")
    public void service(String in) {
        LOG.debug("service received={}", in);
    }

    @Bean
    public MessageChannel myChannelOut() {
        LOG.debug("myChannelOut");
        return new DirectChannel();
    }

    @Bean
    public IntegrationFlow myOutbound() {
        LOG.debug("myOutbound");
        return IntegrationFlows.from(myChannelOut())
                .handle(mySender())
                .get();
    }

    @Bean
    public MessageHandler mySender() {
        LOG.debug("mySender");
        TcpSendingMessageHandler tcpSendingMessageHandler = new TcpSendingMessageHandler();
        tcpSendingMessageHandler.setConnectionFactory(getMyConnFactory());
        return tcpSendingMessageHandler;
    }

}
Please advise!
To change the server port I would shut down the application context and restart it after configuring the new port in a remote configuration server. Can I just close the application context without corrupting a message transfer that is in progress?
I also don't know how to handle the connect-only client requirement.
Use dynamic flow registration; just get the connection to open it without sending.
@SpringBootApplication
public class So62867670Application {

    public static void main(String[] args) {
        SpringApplication.run(So62867670Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(DynamicTcpReceiver receiver) {
        return args -> { // Just a demo to show starting/stopping
            receiver.connectAndListen(1234);
            System.in.read();
            receiver.stop();
            System.in.read();
            receiver.connectAndListen(1235);
            System.in.read();
            receiver.stop();
        };
    }

}

@Component
class DynamicTcpReceiver {

    @Autowired
    private IntegrationFlowContext context;

    private IntegrationFlowRegistration registration;

    public void connectAndListen(int port) throws InterruptedException {
        TcpClientConnectionFactorySpec client = Tcp.netClient("localhost", port)
                .deserializer(TcpCodecs.lf());
        IntegrationFlow flow = IntegrationFlows.from(Tcp.inboundAdapter(client))
                .transform(Transformers.objectToString())
                .handle(System.out::println)
                .get();
        this.registration = context.registration(flow).register();
        client.get().getConnection(); // just open the single shared connection
    }

    public void stop() {
        if (this.registration != null) {
            this.registration.destroy();
            this.registration = null;
        }
    }

}
EDIT
And this is the server side...
@SpringBootApplication
@EnableScheduling
public class So62867670ServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(So62867670ServerApplication.class, args);
    }

    @Bean
    public ApplicationRunner runner(DynamicTcpServer receiver) {
        return args -> { // Just a demo to show starting/stopping
            receiver.tcpListen(1234);
            System.in.read();
            receiver.stop(1234);
            System.in.read();
            receiver.tcpListen(1235);
            System.in.read();
            receiver.stop(1235);
        };
    }

}

@Component
class DynamicTcpServer {

    private static final Logger LOG = LoggerFactory.getLogger(DynamicTcpServer.class);

    @Autowired
    private IntegrationFlowContext flowContext;

    @Autowired
    private ApplicationContext appContext;

    private final Map<Integer, IntegrationFlowRegistration> registrations = new HashMap<>();

    private final Map<String, Entry<Integer, AtomicInteger>> clients = new ConcurrentHashMap<>();

    public void tcpListen(int port) {
        TcpServerConnectionFactorySpec server = Tcp.netServer(port)
                .id("server-" + port)
                .serializer(TcpCodecs.lf());
        server.get().registerListener(msg -> false); // dummy listener so the accept thread doesn't exit
        IntegrationFlow flow = f -> f.handle(Tcp.outboundAdapter(server));
        this.registrations.put(port, flowContext.registration(flow).register());
    }

    public void stop(int port) {
        IntegrationFlowRegistration registration = this.registrations.remove(port);
        if (registration != null) {
            registration.destroy();
        }
    }

    @EventListener
    public void opened(TcpConnectionOpenEvent event) {
        LOG.info(event.toString());
        String connectionId = event.getConnectionId();
        String[] split = connectionId.split(":");
        int port = Integer.parseInt(split[2]);
        this.clients.put(connectionId, new AbstractMap.SimpleEntry<>(port, new AtomicInteger()));
    }

    @EventListener
    public void closed(TcpConnectionCloseEvent event) {
        LOG.info(event.toString());
        this.clients.remove(event.getConnectionId());
    }

    @EventListener
    public void listening(TcpConnectionServerListeningEvent event) {
        LOG.info(event.toString());
    }

    @Scheduled(fixedDelay = 5000)
    public void sender() {
        this.clients.forEach((connectionId, portAndCount) -> {
            IntegrationFlowRegistration registration = this.registrations.get(portAndCount.getKey());
            if (registration != null) {
                LOG.info("Sending to " + connectionId);
                registration.getMessagingTemplate().send(MessageBuilder.withPayload("foo")
                        .setHeader(IpHeaders.CONNECTION_ID, connectionId).build());
                if (portAndCount.getValue().incrementAndGet() > 9) {
                    this.appContext.getBean("server-" + portAndCount.getKey(), TcpNetServerConnectionFactory.class)
                            .closeConnection(connectionId);
                }
            }
        });
    }

}
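As the demo stands, the scheduled sender() pushes "foo" to each connected client every 5 seconds and closes a client's connection after 10 messages, which covers the "server sends 0 to n messages" requirement; changing the listening port amounts to destroying one flow registration and registering a new one on the new port, with no application context restart.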

Give Priority to SFTP Remote Directories

Using a single SFTP channel, I need to process two remote directories, lowpriority and highpriority, with the lowpriority files picked up only after the highpriority ones.
Please let me know how to handle multiple directories in an SFTP inbound adapter with a single channel.
This can be done with the rotating server advice (https://docs.spring.io/spring-integration/reference/html/sftp.html#sftp-rotating-server-advice) in the Spring Integration 5.1.2 release, but what about the 4.3.12 release?
It is not available in 4.3.x; the feature was added in 5.0.7.
It needs infrastructure changes so it will be hard to replicate with custom code in 4.3.x.
You could use two adapters and stop/start them as necessary.
EDIT
Here is one solution; the advice on the primary flow starts the secondary flow when no new files are found. The secondary flow runs just once, then restarts the primary flow; and the cycle continues...
@SpringBootApplication
public class So54329898Application {

    public static void main(String[] args) {
        SpringApplication.run(So54329898Application.class, args);
    }

    @Bean
    public IntegrationFlow primary(SessionFactory<LsEntry> sessionFactory) {
        return IntegrationFlows.from(Sftp.inboundAdapter(sessionFactory)
                        .localDirectory(new File("/tmp/foo"))
                        .remoteDirectory("foo/foo"), e -> e
                    .poller(Pollers.fixedDelay(5_000, 5_000)
                        .advice(startSecondaryAdvice())))
                .channel("channel")
                .get();
    }

    @Bean
    public IntegrationFlow secondary(SessionFactory<LsEntry> sessionFactory) {
        return IntegrationFlows.from(Sftp.inboundAdapter(sessionFactory)
                        .localDirectory(new File("/tmp/foo"))
                        .remoteDirectory("foo/bar"), e -> e
                    .poller(Pollers.trigger(oneShotTrigger(sessionFactory)))
                    .autoStartup(false))
                .channel("channel")
                .get();
    }

    @Bean
    public IntegrationFlow main() {
        return IntegrationFlows.from("channel")
                .handle(System.out::println)
                .get();
    }

    @Bean
    public Advice startSecondaryAdvice() {
        return new StartSecondaryWhenPrimaryIdle();
    }

    @Bean
    public FireOnceTrigger oneShotTrigger(SessionFactory<LsEntry> sessionFactory) {
        return new FireOnceTrigger((Lifecycle) primary(sessionFactory));
    }

    public static class StartSecondaryWhenPrimaryIdle extends AbstractMessageSourceAdvice
            implements ApplicationContextAware {

        private ApplicationContext applicationContext;

        @Override
        public boolean beforeReceive(MessageSource<?> source) {
            return true;
        }

        @Override
        public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
            this.applicationContext = applicationContext;
        }

        @Override
        public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
            if (result == null) {
                System.out.println("No more files on primary; starting single shot on secondary");
                this.applicationContext.getBean("primary", Lifecycle.class).stop();
                this.applicationContext.getBean("secondary", Lifecycle.class).stop();
                this.applicationContext.getBean(FireOnceTrigger.class).reset();
                this.applicationContext.getBean("secondary", Lifecycle.class).start();
            }
            return result;
        }

    }

    public static class FireOnceTrigger implements Trigger {

        private final Lifecycle primary;

        private volatile boolean done;

        public FireOnceTrigger(Lifecycle primary) {
            this.primary = primary;
        }

        @Override
        public Date nextExecutionTime(TriggerContext triggerContext) {
            if (done) {
                System.out.println("One shot on secondary complete; restarting primary");
                this.primary.start();
                return null;
            }
            done = true;
            return new Date();
        }

        public void reset() {
            done = false;
        }

    }

}
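For completeness: on 5.1 and later, the rotating server advice referenced in the question removes the need for the custom advice/trigger code above. A rough sketch following the linked documentation, assuming a DelegatingSessionFactory registered under the key "one" and the directory names from the question; with the default (non-fair) rotation, a directory is polled until it yields no file before the advice moves on, so listing highpriority first approximates the priority requirement:
@Bean
public RotatingServerAdvice rotatingAdvice(DelegatingSessionFactory<LsEntry> sf) {
    List<RotationPolicy.KeyDirectory> keyDirectories = new ArrayList<>();
    keyDirectories.add(new RotationPolicy.KeyDirectory("one", "highpriority")); // drained first
    keyDirectories.add(new RotationPolicy.KeyDirectory("one", "lowpriority"));
    return new RotatingServerAdvice(sf, keyDirectories);
}
The advice is then added to the single inbound adapter's poller via .advice(rotatingAdvice(sf)).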

How to build a nonblocking Consumer when using AsyncRabbitTemplate with Request/Reply Pattern

I'm new to RabbitMQ and currently trying to implement a non-blocking producer with a non-blocking consumer. I've built a test producer where I played around with type references:
@Service
public class Producer {

    @Autowired
    private AsyncRabbitTemplate asyncRabbitTemplate;

    public <T extends RequestEvent<S>, S> RabbitConverterFuture<S> asyncSendEventAndReceive(final T event) {
        return asyncRabbitTemplate.convertSendAndReceiveAsType(QueueConfig.EXCHANGE_NAME, event.getRoutingKey(), event, event.getResponseTypeReference());
    }

}
And elsewhere, the test function that gets called in a RestController:
@Autowired
Producer producer;

public void test() throws InterruptedException, ExecutionException {
    TestEvent requestEvent = new TestEvent("SOMEDATA");
    RabbitConverterFuture<TestResponse> reply = producer.asyncSendEventAndReceive(requestEvent);
    log.info("Hello! The Reply is: {}", reply.get());
}
That much was pretty straightforward; where I'm stuck now is how to make the consumer non-blocking too. My current listener:
@RabbitListener(queues = QueueConfig.QUEUENAME)
public TestResponse onReceive(TestEvent event) {
    Future<TestResponse> replyLater = processDataLater(event.getSomeData());
    return replyLater.get();
}
As far as I'm aware, when using @RabbitListener the listener runs in its own thread, and I could configure the listener container to use more than one thread for the active listeners. Because of that, blocking the listener thread with future.get() does not block the application itself. Still, all the threads might end up blocked while new events sit in the queue unnecessarily. What I would like is to receive the event without having to return the result immediately, which is probably not possible with @RabbitListener alone. Something like:
@RabbitListener(queues = QueueConfig.QUEUENAME)
public void onReceive(TestEvent event) {
    /*
     * Some fictional RabbitMQ API call where I get a ReplyContainer which contains
     * the correlation ID for the event. I can call replyContainer.reply(testResponse)
     * later in the code without blocking the listener thread.
     */
    ReplyContainer replyContainer = AsyncRabbitTemplate.getReplyContainer();
    // processDataLater calls reply on the container when done with its action
    processDataLater(event.getSomeData(), replyContainer);
}
What is the best way to implement such behaviour with RabbitMQ in Spring?
EDIT
Config class:
@Configuration
@EnableRabbit
public class RabbitMQConfig implements RabbitListenerConfigurer {

    public static final String topicExchangeName = "exchange";

    @Bean
    TopicExchange exchange() {
        return new TopicExchange(topicExchangeName);
    }

    @Bean
    public ConnectionFactory rabbitConnectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setHost("localhost");
        return connectionFactory;
    }

    @Bean
    public MappingJackson2MessageConverter consumerJackson2MessageConverter() {
        return new MappingJackson2MessageConverter();
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(rabbitConnectionFactory());
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
        return rabbitTemplate;
    }

    @Bean
    public AsyncRabbitTemplate asyncRabbitTemplate() {
        return new AsyncRabbitTemplate(rabbitTemplate());
    }

    @Bean
    public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    Queue queue() {
        return new Queue("test", false);
    }

    @Bean
    Binding binding() {
        return BindingBuilder.bind(queue()).to(exchange()).with("foo.#");
    }

    @Bean
    public SimpleRabbitListenerContainerFactory myRabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(rabbitConnectionFactory());
        factory.setMaxConcurrentConsumers(5);
        factory.setMessageConverter(producerJackson2MessageConverter());
        factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        return factory;
    }

    @Override
    public void configureRabbitListeners(final RabbitListenerEndpointRegistrar registrar) {
        registrar.setContainerFactory(myRabbitListenerContainerFactory());
    }

}
I don't have time to test it right now, but something like this should work; presumably you don't want to lose messages, so you need to set the ackMode to MANUAL and do the acks yourself (as shown).
UPDATE
@SpringBootApplication
public class So52173111Application {

    private final ExecutorService exec = Executors.newCachedThreadPool();

    @Autowired
    private RabbitTemplate template;

    @Bean
    public ApplicationRunner runner(AsyncRabbitTemplate asyncTemplate) {
        return args -> {
            RabbitConverterFuture<Object> future = asyncTemplate.convertSendAndReceive("foo", "test");
            future.addCallback(r -> {
                System.out.println("Reply: " + r);
            }, t -> {
                t.printStackTrace();
            });
        };
    }

    @Bean
    public AsyncRabbitTemplate asyncTemplate(RabbitTemplate template) {
        return new AsyncRabbitTemplate(template);
    }

    @RabbitListener(queues = "foo")
    public void listen(String in, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag,
            @Header(AmqpHeaders.CORRELATION_ID) String correlationId,
            @Header(AmqpHeaders.REPLY_TO) String replyTo) {
        ListenableFuture<String> future = handleInput(in);
        future.addCallback(result -> {
            Address address = new Address(replyTo);
            this.template.convertAndSend(address.getExchangeName(), address.getRoutingKey(), result, m -> {
                m.getMessageProperties().setCorrelationId(correlationId);
                return m;
            });
            try {
                channel.basicAck(tag, false);
            }
            catch (IOException e) {
                e.printStackTrace();
            }
        }, t -> {
            t.printStackTrace();
        });
    }

    private ListenableFuture<String> handleInput(String in) {
        SettableListenableFuture<String> future = new SettableListenableFuture<String>();
        exec.execute(() -> {
            try {
                Thread.sleep(2000);
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            future.set(in.toUpperCase());
        });
        return future;
    }

    public static void main(String[] args) {
        SpringApplication.run(So52173111Application.class, args);
    }

}
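Running this, the runner sends "test" to the foo queue; the listener hands the work off to the executor and returns immediately, freeing the container thread, and about two seconds later the callback sends the uppercased reply to the replyTo address and acks the original message, so "Reply: TEST" is printed without ever blocking a listener thread.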

Spring Boot RabbitMQ consumer consuming only the first time; how to configure multiple listeners

My Spring Boot RabbitMQ consumer consumes only the first message. Please help me get the configuration right; also, how can I configure multiple listeners? I am developing an automated mapping solution which executes various jobs, and I need to put those jobs in a queue.
Here's my application class, Application.java:
@SpringBootApplication
@ComponentScan(basePackages = { "com.fractal.sago", "com.fractal.grpc" })
public class Application extends SpringBootServletInitializer {

    public static void main(String[] args) throws Exception {
        SpringApplication.run(Application.class, args);
    }

}
RabbitMqConfig class:
@Configuration
public class RabbitMqConfig {

    public final static String JOB_QUEUE_NAME = "jobQueue";
    public final static String JOB_EXCHANGE_NAME = "jobExchange";

    @Bean
    Queue jobQueue() {
        return new Queue(JOB_QUEUE_NAME, true);
    }

    @Bean
    DirectExchange jobExchange() {
        return new DirectExchange(JOB_EXCHANGE_NAME);
    }

    @Bean
    Binding jobBinding(DirectExchange directExchange) {
        return BindingBuilder.bind(jobQueue()).to(jobExchange()).with(jobQueue().getName());
    }

    @Bean
    SimpleMessageListenerContainer jobQueueContainer(ConnectionFactory connectionFactory,
            MessageListenerAdapter joblistenerAdapter) {
        SimpleMessageListenerContainer jobQueueContainer = new SimpleMessageListenerContainer();
        jobQueueContainer.setConnectionFactory(connectionFactory);
        jobQueueContainer.setQueueNames(JOB_QUEUE_NAME);
        jobQueueContainer.setMessageListener(joblistenerAdapter);
        return jobQueueContainer;
    }

    @Bean
    MessageListenerAdapter joblistenerAdapter(JobQueueConsumer messageReceiver) {
        MessageListenerAdapter messageListenerAdapter = new MessageListenerAdapter(messageReceiver, "receiveMessage");
        messageListenerAdapter.setMessageConverter(producerJackson2MessageConverter());
        return messageListenerAdapter;
    }

    @Bean
    public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

}
Producer: JobQueueProducer
@Component
public class JobQueueProducer {

    @Autowired
    RabbitTemplate rabbitTemplate;

    public void sendMessage(String message) {
        Message messageToSend = MessageBuilder.withBody(message.getBytes())
                .setDeliveryMode(MessageDeliveryMode.PERSISTENT).build();
        rabbitTemplate.convertAndSend(RabbitMqConfig.JOB_EXCHANGE_NAME, RabbitMqConfig.JOB_QUEUE_NAME, messageToSend);
        // rabbitTemplate.convertAndSend(RabbitMqConfig.JOB_QUEUE_NAME, messageToSend);
    }

}
Consumer: JobQueueConsumer
@Component
public class JobQueueConsumer implements MessageListener {

    @Autowired
    SagoAlgo sagoAlgo;

    @Autowired
    CCDMappingService ccdMappingService;

    @RabbitListener(queues = { RabbitMqConfig.JOB_QUEUE_NAME })
    public void receiveMessage(Message message) throws SQLException {
        System.out.println("Received Message: " + new String(message.getBody()));
        Integer jobId = Integer.parseInt(new String(message.getBody()));
        System.out.println(jobId);
        CCDMappingVO ccdVo = ccdMappingService.fetchCCDWithCategoriesById(jobId);
        sagoAlgo.execAlgo(ccdVo); // my algo to be executed
    }

    // default method to be executed when I implement MessageListener
    public void onMessage(Message message) {
        // System.out.println("Received Message: " + new String(message.getBody()));
    }

}
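No answer was recorded for this thread, but one thing stands out from the code itself: jobQueue is wired to the same consumer twice, once through the SimpleMessageListenerContainer/MessageListenerAdapter pair and once through the @RabbitListener annotation, so the broker round-robins deliveries between two competing consumers that deserialize differently. A minimal sketch keeping only the @RabbitListener path (an assumption, not an accepted fix; it presumes spring-boot-starter-amqp so that Boot auto-configures the listener container, keeps the queue/exchange/binding beans, and drops the jobQueueContainer and joblistenerAdapter beans):
@Component
public class JobQueueConsumer {

    @Autowired
    SagoAlgo sagoAlgo;

    @Autowired
    CCDMappingService ccdMappingService;

    @RabbitListener(queues = RabbitMqConfig.JOB_QUEUE_NAME)
    public void receiveMessage(Message message) throws SQLException {
        // the body is the raw bytes sent by JobQueueProducer
        Integer jobId = Integer.parseInt(new String(message.getBody()));
        CCDMappingVO ccdVo = ccdMappingService.fetchCCDWithCategoriesById(jobId);
        sagoAlgo.execAlgo(ccdVo);
    }

}
Additional listeners can then be added as further @RabbitListener methods, each bound to its own queue.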
