Kafka producer is not recreated with @RefreshScope - spring

I have a config server and several clients that work with Kafka. I would like my clients to work non-stop when I update config properties on a config server.
Here is my configuration setup:
@Component
@RefreshScope
@ConfigurationProperties(prefix = "params")
public class ConfigParams {
    private KafkaParams kafkaParams = new KafkaParams(); // some custom params (doesn't matter)
    // ...
}
@Configuration
public class KafkaConfig {
    @Autowired
    private ConfigParams configParams;
    // ...

    @Bean
    @RefreshScope
    MyKafkaBean myKafkaBean() {
        return new MyKafkaBean(configParams.getKafkaParams()); // here I create my own kafka bean with topics, producers and consumers
    }
    // ...
}
Problem description:
If I change the bootstrap.servers property to an incorrect value while the Kafka producer is in use, a WARN is logged saying the producer cannot connect to the Kafka broker (which is correct). When I fix the property, the connection comes back and everything works fine, but I keep getting WARNs about no connection to the broker for "producer-1" ("producer-2" was created for the new connection).
P.S. Consumers work fine without any issues.
P.S.P.S. I've already tried to delete the producer manually using DefaultKafkaProducerFactory#reset or DefaultKafkaProducerFactory#destroy, but it didn't help.
P.S.P.S.P.S. I assume the problem is that Kafka keeps a cache of producers, but I don't know how to solve it.
I'll be grateful for your help.

I had a similar issue. I used the RefreshRemoteApplicationEvent listener to manually close the producer before the refresh mechanism is processed.
For example:
@Slf4j
public class MyRefreshEventListener implements ApplicationListener<RefreshRemoteApplicationEvent> {
    private final MyKafkaBean myKafkaBean;

    public MyRefreshEventListener(@Autowired MyKafkaBean myKafkaBean) {
        this.myKafkaBean = myKafkaBean;
    }

    @Override
    public void onApplicationEvent(RefreshRemoteApplicationEvent event) {
        log.info("Refresh myKafkaBean:");
        if (myKafkaBean != null) {
            closeProducer(myKafkaBean.getProducer()); // assuming MyKafkaBean exposes its Producer
        }
    }

    private void closeProducer(Producer<String, String> producer) {
        if (producer != null) {
            producer.flush();
            producer.close();
        }
    }
}
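For the event to actually reach it, the listener still has to be registered as a bean. A minimal sketch of that registration (the configuration class name is just an assumption):

@Configuration
public class RefreshListenerConfig {

    // Registers the listener so Spring publishes RefreshRemoteApplicationEvent to it
    // before the refresh scope is rebuilt.
    @Bean
    public MyRefreshEventListener myRefreshEventListener(MyKafkaBean myKafkaBean) {
        return new MyRefreshEventListener(myKafkaBean);
    }
}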

Related

How to create instance specific message queues in springboot rest api

I have a number of microservices, each running in its own container in a load balanced environment. I have a need for each instance of these microservices to create a rabbitmq queue when it starts up and delete it when it stops. I have currently defined the following property in my application properties file:
config_queue: config_${PID}
My message queue listener looks like this:
public class ConfigListener {
    Logger logger = LoggerFactory.getLogger(ConfigListener.class);

    // https://www.programcreek.com/java-api-examples/index.php?api=org.springframework.amqp.rabbit.annotation.RabbitListener
    @RabbitListener(bindings = @QueueBinding(
            value = @Queue(value = "${config_queue}",
                    autoDelete = "true"),
            exchange = @Exchange(value = AppConstants.TOPIC_CONFIGURATION,
                    type = ExchangeTypes.FANOUT)
    ))
    public void configChanged(String message){
        ... application logic
    }
}
All this works great when I run the microservice. A queue with the config prefix and process id gets created and is auto-deleted when I stop the service.
However, when I run this service and others in their individual docker containers, all services have the same PID, which is 1.
Does anybody have any idea how I can specify a queue that is unique to each instance?
Thanks in advance for your help.
Use an AnonymousQueue instead:
@SpringBootApplication
public class So72030217Application {

    public static void main(String[] args) {
        SpringApplication.run(So72030217Application.class, args);
    }

    @RabbitListener(queues = "#{configQueue.name}")
    public void listen(String in) {
        System.out.println(in);
    }
}

@Configuration
class Config {

    @Bean
    FanoutExchange fanout() {
        return new FanoutExchange("config");
    }

    @Bean
    Queue configQueue() {
        return new AnonymousQueue(new Base64UrlNamingStrategy("config_"));
    }

    @Bean
    Binding binding() {
        return BindingBuilder.bind(configQueue()).to(fanout());
    }
}
AnonymousQueues are auto-delete and use a Base64 encoded UUID in the name.
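For illustration, the naming strategy can be exercised directly to see what kind of name the broker will show; the printed value below is just an assumed sample, the real suffix is random on every call:

public class QueueNameDemo {
    public static void main(String[] args) {
        // Prints something like "config_gwt2Pf4LS-aD1qjVNcsPzA":
        // the prefix followed by a Base64-URL-encoded random UUID.
        System.out.println(new Base64UrlNamingStrategy("config_").generateName());
    }
}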

Transactional kafka listener retry

I'm trying to create a Spring Kafka @KafkaListener which is both transactional (Kafka and database) and uses retry. I am using Spring Boot. The documentation for error handlers says that
When transactions are being used, no error handlers are configured, by default, so that the exception will roll back the transaction. Error handling for transactional containers are handled by the AfterRollbackProcessor. If you provide a custom error handler when using transactions, it must throw an exception if you want the transaction rolled back (source).
However, when I configure my listener with a @Transactional("kafkaTransactionManager") annotation, even though I can clearly see that the template rolls back produced messages when an exception is raised, the container actually uses a non-null commonErrorHandler rather than an AfterRollbackProcessor. This is the case even when I explicitly set the commonErrorHandler to null in the container factory. I do not see any evidence that my configured AfterRollbackProcessor is ever invoked, even after the commonErrorHandler exhausts its retry policy.
I'm uncertain how Spring Kafka's error handling works in general at this point, and am looking for clarification. The questions I want to answer are:
What is the recommended way to configure transactional Kafka listeners with Spring Kafka 2.8.0? Have I done it correctly?
Should the common error handler indeed be used rather than the after rollback processor? Does it roll back the current transaction before trying to process the message again according to the retry policy?
In general, when I have a transactional Kafka listener, is there ever more than one layer of error handling I should be aware of? E.g. if my common error handler re-throws exceptions of kind T, will another handler catch them and potentially start a retry of its own?
Thanks!
My code:
@Configuration
@EnableScheduling
@EnableKafka
public class KafkaConfiguration {
    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfiguration.class);

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConsumerFactory<Object, Object> consumerFactory) {
        var factory = new ConcurrentKafkaListenerContainerFactory<Integer, Object>();
        factory.setConsumerFactory(consumerFactory);
        var afterRollbackProcessor =
                new DefaultAfterRollbackProcessor<Object, Object>(
                        (record, e) -> LOGGER.info("After rollback processor triggered! {}", e.getMessage()),
                        new FixedBackOff(1_000, 1));
        // Configures different error handling for different listeners.
        factory.setContainerCustomizer(
                container -> {
                    var groupId = container.getContainerProperties().getGroupId();
                    if (groupId.equals("InputProcessorHigh") || groupId.equals("InputProcessorLow")) {
                        container.setAfterRollbackProcessor(afterRollbackProcessor);
                        // If I set commonErrorHandler to null, it is defaulted instead.
                    }
                });
        return factory;
    }
}
@Component
public class InputProcessor {
    private static final Logger LOGGER = LoggerFactory.getLogger(InputProcessor.class);
    private final KafkaTemplate<Integer, Object> template;
    private final AuditLogRepository repository;

    @Autowired
    public InputProcessor(KafkaTemplate<Integer, Object> template, AuditLogRepository repository) {
        this.template = template;
        this.repository = repository;
    }

    @KafkaListener(id = "InputProcessorHigh", topics = "input-high", concurrency = "3")
    @Transactional("kafkaTransactionManager")
    public void inputHighProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    @KafkaListener(id = "InputProcessorLow", topics = "input-low", concurrency = "1")
    @Transactional("kafkaTransactionManager")
    public void inputLowProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    public void processInputs(ConsumerRecord<Integer, Input> input) {
        var key = input.key();
        var message = input.value().getMessage();
        var output = new Output().setMessage(message);
        LOGGER.info("Processing {}", message);
        template.send("output-left", key, output);
        repository.createIfNotExists(message); // idempotent insert
        template.send("output-right", key, output);
        if (message.contains("ERROR")) {
            throw new RuntimeException("Simulated processing error!");
        }
    }
}
My application.yaml (minus my bootstrap-servers and security config):
spring:
  kafka:
    consumer:
      auto-offset-reset: 'earliest'
      key-deserializer: 'org.apache.kafka.common.serialization.IntegerDeserializer'
      value-deserializer: 'org.springframework.kafka.support.serializer.JsonDeserializer'
      isolation-level: 'read_committed'
      properties:
        spring.json.trusted.packages: 'java.util,java.lang,com.github.tomboyo.silverbroccoli.*'
    producer:
      transaction-id-prefix: 'tx-'
      key-serializer: 'org.apache.kafka.common.serialization.IntegerSerializer'
      value-serializer: 'org.springframework.kafka.support.serializer.JsonSerializer'
[EDIT] (solution code)
I was able to figure it out with Gary's help. As they say, we need to set the kafka transaction manager on the container so that the container can start transactions. The transactions documentation doesn't cover how to do this, and there are a few ways. First, we can get the mutable container properties object from the factory and set the transaction manager on that:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory() {
    var factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setTransactionManager(...);
    return factory;
}
If we are in Spring Boot, we can re-use some of the auto configuration to set sensible defaults on our factory before we customize it. We can see that the KafkaAutoConfiguration module imports KafkaAnnotationDrivenConfiguration, which produces a ConcurrentKafkaListenerContainerFactoryConfigurer bean. This appears to be responsible for all the default configuration in a Spring-Boot application. So, we can inject that bean and use it to initialize our factory before adding customizations:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer bootConfigurer,
        ConsumerFactory<Object, Object> consumerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    // Apply default spring-boot configuration.
    bootConfigurer.configure(factory, consumerFactory);
    factory.setContainerCustomizer(
            container -> {
                ... // do whatever
            });
    return factory;
}
Once that's done, the container uses the AfterRollbackProcessor for error handling, as expected. As long as I don't explicitly configure a common error handler, this appears to be the only layer of exception handling.
The AfterRollbackProcessor is only used when the container knows about the transaction; you must provide a KafkaTransactionManager to the container so that the Kafka transaction is started by the container, and the offsets are sent to the transaction. Using @Transactional is not the correct way to start a Kafka transaction.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#transactions
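A minimal sketch of that wiring; the bean names and the injected producer factory are assumptions for illustration, not taken from the question:

@Configuration
public class KafkaTxConfig {

    // Built from the same producer factory the KafkaTemplate uses.
    @Bean
    public KafkaTransactionManager<Integer, Object> kafkaTransactionManager(
            ProducerFactory<Integer, Object> producerFactory) {
        return new KafkaTransactionManager<>(producerFactory);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConsumerFactory<Object, Object> consumerFactory,
            KafkaTransactionManager<Integer, Object> kafkaTransactionManager) {
        var factory = new ConcurrentKafkaListenerContainerFactory<Integer, Object>();
        factory.setConsumerFactory(consumerFactory);
        // The container now starts the Kafka transaction and sends offsets to it,
        // so failures are handled by the AfterRollbackProcessor rather than a CommonErrorHandler.
        factory.getContainerProperties().setTransactionManager(kafkaTransactionManager);
        return factory;
    }
}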

RabbitMQ binding does not work with spring-boot

With Spring Boot 1.5.9.RELEASE, code as below:
@Configuration
@EnableRabbit
public class RabbitmqConfig {
    @Autowired
    ConnectionFactory connectionFactory;

    @Bean // with or without this bean, neither works
    public AmqpAdmin amqpAdmin() {
        return new RabbitAdmin(connectionFactory);
    }

    @Bean
    public Queue bbbQueue() {
        return new Queue("bbb");
    }

    @Bean
    public TopicExchange requestExchange() {
        return new TopicExchange("request");
    }

    @Bean
    public Binding bbbBinding() {
        return BindingBuilder.bind(bbbQueue())
                .to(requestExchange())
                .with("*");
    }
}
After the jar starts, there is no error message and no topic exchange shows up on the RabbitMQ management UI (15672) exchanges page.
However, with Python code, the topic exchange shows up and the binding can be seen on the exchange detail page. Python code as below:
connection = pika.BlockingConnection(pika.ConnectionParameters(host='10.189.134.47'))
channel = connection.channel()
channel.exchange_declare(exchange='request', exchange_type='topic', durable=True)
result = channel.queue_declare(queue='aaa', durable=True)
queue_name = result.method.queue
channel.queue_bind(exchange='aaa', routing_key='*',
                   queue=queue_name)
print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r" % body)

channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()
I just copied your code and it works fine.
NOTE The queue/binding won't be declared until a connection is opened, such as by a listener container that reads from the queue (or sending a message with a RabbitTemplate).
@RabbitListener(queues = "bbb")
public void listen(String in) {
    System.out.println(in);
}
The container must have autoStartup=true (default).
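Sending a message also opens a connection and triggers the declarations. A minimal sketch that could sit in the same RabbitmqConfig class (the runner bean and the routing key are arbitrary, chosen just for illustration):

@Bean
ApplicationRunner declareByPublishing(RabbitTemplate template) {
    // Opening a connection through the template causes the RabbitAdmin
    // to declare the exchange, queue and binding defined as beans.
    return args -> template.convertAndSend("request", "some.key", "hello");
}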

Kafka consumer picking up topics dynamically

I have a Kafka consumer configured in Spring Boot. Here's the config class:
@EnableKafka
@Configuration
@PropertySource({"classpath:kafka.properties"})
public class KafkaConsumerConfig {
    @Autowired
    private Environment env;

    @Bean
    public ConsumerFactory<String, GenericData.Record> consumerFactory() {
        Map<String, Object> dataRiverProps = new HashMap<>(); // consumer properties
        dataRiverProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, env.getProperty("bootstrap.servers"));
        dataRiverProps.put(ConsumerConfig.GROUP_ID_CONFIG, env.getProperty("group.id"));
        dataRiverProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, env.getProperty("enable.auto.commit"));
        dataRiverProps.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, env.getProperty("auto.commit.interval.ms"));
        dataRiverProps.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, env.getProperty("session.timeout.ms"));
        dataRiverProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, env.getProperty("auto.offset.reset"));
        dataRiverProps.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, env.getProperty("schema.registry.url"));
        dataRiverProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
        dataRiverProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
        return new DefaultKafkaConsumerFactory<>(dataRiverProps);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, GenericData.Record> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, GenericData.Record> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
And here's the consumer:
@Component
public class KafkaConsumer {
    @Autowired
    private MessageProcessor messageProcessor;

    @KafkaListener(topics = "#{'${kafka.topics}'.split(',')}", containerFactory = "kafkaListenerContainerFactory")
    public void consumeAvro(GenericData.Record message) {
        messageProcessor.process();
    }
}
Please note that I am using topics = "#{'${kafka.topics}'.split(',')}" to pick up the topics from a properties file.
And this is what my kafka.properties file looks like:
kafka.topics=pwdChange,pwdCreation
bootstrap.servers=aaa.bbb.com:37900
group.id=pwdManagement
enable.auto.commit=true
auto.commit.interval.ms=1000
session.timeout.ms=30000
schema.registry.url=http://aaa.bbb.com:37800
Now if I am to add a new topic to the subscription, say pwdExpire, and modify the properties file as follows:
kafka.topics=pwdChange,pwdCreation,pwdExpire
Is there a way for my consumer to start subscribing to this new topic without restarting the server?
I have found this post Spring Kafka - Subscribe new topics during runtime, but the documentation has this to say about metadata.max.age.ms:
The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.
Sounds to me like it won't work. Thanks for your help!
No; the only way to do that is to use a topic pattern; as new topics are added (that match the pattern), the broker will add them to the subscription, after 5 minutes, by default.
You can, however, add new listener container(s) for the new topic(s) at runtime.
Another option would be to load the #KafkaListener bean in a child application context and re-create the context each time the topic(s) change.
EDIT
See the javadocs for KafkaConsumer.subscribe(Pattern pattern)...
/**
* Subscribe to all topics matching specified pattern to get dynamically assigned partitions.
* The pattern matching will be done periodically against topics existing at the time of check.
* <p>
...
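A minimal sketch of the pattern-based subscription; the pattern itself is an assumption chosen to match the topics in the question:

@Component
public class PatternKafkaConsumer {

    // Matches pwdChange, pwdCreation, pwdExpire and any future "pwd..." topics.
    // New matching topics are picked up after the consumer's next metadata refresh
    // (metadata.max.age.ms, 5 minutes by default).
    @KafkaListener(topicPattern = "pwd.*", containerFactory = "kafkaListenerContainerFactory")
    public void consumeAvro(GenericData.Record message) {
        // process the message
    }
}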

Send and receive files from FTP in Spring Boot

I'm new to Spring Framework and, indeed, I'm learning and using Spring Boot. Recently, in the app I'm developing, I made Quartz Scheduler work, and now I want to make Spring Integration work there: an FTP connection to a server to write and read files.
What I want is really simple (as I've been able to do so in a previous Java application). I've got two Quartz jobs scheduled to fire at different times daily: one of them reads a file from an FTP server and the other writes a file to an FTP server.
I'll detail what I've developed so far.
@SpringBootApplication
@ImportResource("classpath:ws-config.xml")
@EnableIntegration
@EnableScheduling
public class MyApp extends SpringBootServletInitializer {
    @Autowired
    private Configuration configuration;
    //...

    @Bean
    public DefaultFtpsSessionFactory myFtpsSessionFactory(){
        DefaultFtpsSessionFactory sess = new DefaultFtpsSessionFactory();
        Ftp ftp = configuration.getFtp();
        sess.setHost(ftp.getServer());
        sess.setPort(ftp.getPort());
        sess.setUsername(ftp.getUsername());
        sess.setPassword(ftp.getPassword());
        return sess;
    }
}
I've named the following class FtpGateway:
@Component
public class FtpGateway {
    @Autowired
    private DefaultFtpsSessionFactory sess;

    public void sendFile(){
        // todo
    }

    public void readFile(){
        // todo
    }
}
I'm reading this documentation to learn how to do so. Spring Integration's FTP support seems to be event driven, so I don't know how I can execute either sendFile() or readFile() from my jobs when the trigger is fired at an exact time.
The documentation tells me something about using an Inbound Channel Adapter (to read files from an FTP server?), an Outbound Channel Adapter (to write files to an FTP server?) and an Outbound Gateway (to do what?):
Spring Integration supports sending and receiving files over FTP/FTPS by providing three client side endpoints: Inbound Channel Adapter, Outbound Channel Adapter, and Outbound Gateway. It also provides convenient namespace-based configuration options for defining these client components.
So, it's not clear to me how to proceed.
Please, could anybody give me a hint?
Thank you!
EDIT:
Thank you @M. Deinum. First, I'll try a simple task: read a file from the FTP server; the poller will run every 5 seconds. This is what I've added:
@Bean
public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
    FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(myFtpsSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(false);
    fileSynchronizer.setPreserveTimestamp(true);
    fileSynchronizer.setRemoteDirectory("/Entrada");
    fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.csv"));
    return fileSynchronizer;
}

@Bean
@InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> ftpMessageSource() {
    FtpInboundFileSynchronizingMessageSource source = new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
    source.setLocalDirectory(new File(configuracion.getDirFicherosDescargados()));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    return source;
}

@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler handler() {
    return new MessageHandler() {
        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            Object payload = message.getPayload();
            if (payload instanceof File) {
                File f = (File) payload;
                System.out.println(f.getName());
            } else {
                System.out.println(message.getPayload());
            }
        }
    };
}
Then, when the app is running, I put a new csv file into the "Entrada" remote folder, but the handler() method isn't run after 5 seconds... Am I doing something wrong?
Please add @Scheduled(fixedDelay = 5000) over your poller method.
You should use Spring Batch with a tasklet. It is far easier to configure beans, cron timing, and input sources with the existing interfaces provided by Spring.
https://www.baeldung.com/introduction-to-spring-batch
The example above covers both annotation-based and XML-based configuration; you can use either.
Another benefit: you can take advantage of listeners and parallel steps. This framework can also be used in a Reader - Processor - Writer manner, as in the sketch below.
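A minimal tasklet sketch of that idea; the job and step names, and the call into the FtpGateway shown earlier, are assumptions for illustration:

@Configuration
@EnableBatchProcessing
public class FtpBatchConfig {

    // A single-step job whose tasklet reads the file through the gateway.
    @Bean
    public Job ftpReadJob(JobBuilderFactory jobs, StepBuilderFactory steps, FtpGateway ftpGateway) {
        Step readStep = steps.get("readFromFtp")
                .tasklet((contribution, chunkContext) -> {
                    ftpGateway.readFile();
                    return RepeatStatus.FINISHED;
                })
                .build();
        return jobs.get("ftpReadJob").start(readStep).build();
    }
}

The Quartz job (or a @Scheduled method) would then trigger it through a JobLauncher.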
