I have a small project in Spring Kafka.
I would like to pass my Kafka topic in from application.yml and avoid hard-coding it. At the moment I have this situation:
public class KafkaConsumer {

    @Autowired
    private UserRepository userRepository;

    @KafkaListener(topics = "myTopic")
    public void listen(@Validate UserDto userDto) {
        User user = new User(userDto);
        userRepository.save(userDto.getAge(), user);
    }
}
At the moment the Kafka topic is a static string. Is it possible to put it in application.yml and have it read from there? Thanks everyone for any help.
You can put your topic in application.yml:
spring:
  kafka:
    template:
      default-topic: "MyTopic"
In your KafkaListener:
@KafkaListener(topics = "#{'${spring.kafka.template.default-topic}'}")
That way you avoid the problem of the annotation attribute value failing to take a dynamic value.
This worked for me.
You can use the entry below in your application.yml file.
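For example (the exact key name is up to you; it just has to match the placeholder used in the snippets that follow):

kafka:
  topic:
    name: "myTopic"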
Usually we use @Value as below to pick up data from properties/YAML files for a given key in your Java class:

@Value("${kafka.topic.name}")
private String TOPIC_NAME;

Since the @KafkaListener annotation expects a constant here, you can use the placeholder directly, as below:
public class KafkaConsumer {

    @Autowired
    private UserRepository userRepository;

    @KafkaListener(topics = "${kafka.topic.name}")
    public void listen(@Validate UserDto userDto) {
        User user = new User(userDto);
        userRepository.save(userDto.getAge(), user);
    }
}
I have a Spring Boot test that checks whether a Kafka consumer listens for a message on a specific topic. The Kafka listener is triggered when using @SpringBootTest. But I don't want to load all the classes, so I supplied only the listener class, like this: @SpringBootTest(classes = {KafkaConsumerTest.class}).
When only the consumer class is loaded, the listener no longer triggers. Is there something I am missing?
Here is the KafkaTestConsumer class
@Service
public class KafkaTestConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaTestConsumer.class);

    private CountDownLatch latch = new CountDownLatch(1);

    private String payload;

    @KafkaListener(topics = {"topic"})
    public void receive(ConsumerRecord<?, ?> consumerRecord) {
        payload = consumerRecord.toString();
        latch.countDown();
    }

    public CountDownLatch getLatch() {
        return latch;
    }

    public void resetLatch() {
        latch = new CountDownLatch(1);
    }

    public String getPayload() {
        return payload;
    }
}
It would be great to see what your KafkaConsumerTest looks like, but perhaps you are just overriding the whole auto-configuration with a plain @Configuration.
See more in docs: https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.testing.spring-boot-applications.detecting-configuration
If you want to customize the primary configuration, you can use a nested @TestConfiguration class. Unlike a nested @Configuration class, which would be used instead of your application’s primary configuration, a nested @TestConfiguration class is used in addition to your application’s primary configuration.
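As an illustration (a sketch only: the embedded broker, topic creation and assertion calls are assumptions, not taken from the question), a test that keeps the full primary configuration and only adds test-specific beans could look like this:

@SpringBootTest
@EmbeddedKafka(bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class KafkaTestConsumerIT {

    // Used in addition to, not instead of, the primary configuration:
    // here it only creates the test topic on the embedded broker.
    @TestConfiguration
    static class TestTopicConfig {
        @Bean
        NewTopic testTopic() {
            return new NewTopic("topic", 1, (short) 1);
        }
    }

    @Autowired
    private KafkaTestConsumer consumer;   // found by the application's component scan

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Test
    void listenerIsTriggered() throws Exception {
        kafkaTemplate.send("topic", "hello");
        assertTrue(consumer.getLatch().await(10, TimeUnit.SECONDS));
    }
}

The important part is not restricting classes= to the listener alone; with the plain @SpringBootTest the auto-configured listener container factory is still present, so the @KafkaListener is registered again.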
I couldn't find any example of switching between Kafka clusters.
Has anyone implemented the ABSwitchCluster class from Spring Kafka?
https://docs.spring.io/spring-kafka/reference/html/
I tried the code below, but it's not switching clusters.
@RestController
public class ApacheKafkaWebController {

    @Autowired
    ConsumerKakfaConfiguration configuration;

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Autowired
    private ABSwitchCluster switcher;

    @GetMapping(value = "/switch")
    public String producer() {
        registry.stop();
        switcher.secondary();
        registry.start();
        return "switched!";
    }
}
and the switcher bean here:
@Bean
public ABSwitchCluster switcher() {
    return new ABSwitchCluster("127.0.0.1:9095", "127.0.0.1:9096");
}
Could you please tell me if I am missing anything here? It is still connecting to port 9095.
See this answer and this test.
Basically, you switch the cluster and reset the connections by stopping and starting listener containers and resetting the producer factory.
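A sketch of the missing pieces, based on that (bean names, generics and the use of Boot's KafkaProperties are assumptions): the ABSwitchCluster has to be set as the bootstrap-servers supplier on the consumer and producer factories, otherwise calling secondary() has no effect, and the producer factory has to be reset during the switch so cached producers are closed.

@Configuration
public class SwitchingKafkaConfig {

    @Bean
    public ABSwitchCluster switcher() {
        return new ABSwitchCluster("127.0.0.1:9095", "127.0.0.1:9096");
    }

    @Bean
    public DefaultKafkaConsumerFactory<Object, Object> kafkaConsumerFactory(KafkaProperties properties) {
        DefaultKafkaConsumerFactory<Object, Object> cf =
                new DefaultKafkaConsumerFactory<>(properties.buildConsumerProperties());
        cf.setBootstrapServersSupplier(switcher());   // listener containers now resolve servers via the switcher
        return cf;
    }

    @Bean
    public DefaultKafkaProducerFactory<Object, Object> kafkaProducerFactory(KafkaProperties properties) {
        DefaultKafkaProducerFactory<Object, Object> pf =
                new DefaultKafkaProducerFactory<>(properties.buildProducerProperties());
        pf.setBootstrapServersSupplier(switcher());   // producers too
        return pf;
    }
}

The switch endpoint then also resets the (injected) producer factory:

@GetMapping(value = "/switch")
public String producer() {
    registry.stop();              // stop listener containers, closing their consumers
    producerFactory.reset();      // close cached producers
    switcher.secondary();         // point the supplier at the second cluster
    registry.start();             // containers reconnect against the new servers
    return "switched!";
}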
I'm using a compacted topic in Kafka which I load into a HashMap at application startup.
Then I'm listening to a normal topic for messages and processing them using the HashMap built from the compacted topic.
How can I make sure the compacted topic is fully read and the HashMap fully initialized before starting to listen to the other topics ?
(Same for RestControllers)
Implement SmartLifecycle and load the map in start(). Make sure the phase is earlier than any other object that needs the map.
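A minimal sketch of that idea (the topic name, the generic types, the cache shape and the phase value are assumptions):

@Component
public class CompactedTopicLoader implements SmartLifecycle {

    private final ConsumerFactory<String, String> consumerFactory;
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private volatile boolean running;

    public CompactedTopicLoader(ConsumerFactory<String, String> consumerFactory) {
        this.consumerFactory = consumerFactory;
    }

    @Override
    public void start() {
        // Drain the compacted topic before anything in a later phase starts.
        try (Consumer<String, String> consumer = consumerFactory.createConsumer()) {
            consumer.subscribe(List.of("compacted-topic"));
            ConsumerRecords<String, String> records;
            do {
                records = consumer.poll(Duration.ofSeconds(2));
                records.forEach(record -> cache.put(record.key(), record.value()));
            } while (!records.isEmpty());
        }
        running = true;
    }

    @Override
    public void stop() {
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public int getPhase() {
        // Earlier than the listener containers (and anything else that needs the map).
        return Integer.MIN_VALUE;
    }

    public Map<String, String> getCache() {
        return cache;
    }
}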
This is an old question, I know, but I wanted to provide a more complete code sample of the solution I ended up with when I struggled with this very problem myself.
The idea is that, as Gary has mentioned in the comments on his own answer, a listener isn't the right thing to use during initialization - that comes afterwards. An alternative to Gary's SmartLifecycle idea, however, is InitializingBean, which I find less complicated to implement, since it's only one method: afterPropertiesSet():
@Slf4j
@Configuration
@RequiredArgsConstructor
public class MyCacheInitializer implements InitializingBean {

    private final ApplicationProperties applicationProperties; // A custom ConfigurationProperties class
    private final KafkaProperties kafkaProperties;
    private final ConsumerFactory<String, Bytes> consumerFactory;
    private final MyKafkaMessageProcessor messageProcessor;

    @Override
    public void afterPropertiesSet() {
        String topicName = applicationProperties.getKafka().getConsumer().get("my-consumer").getTopic();
        Duration pollTimeout = kafkaProperties.getListener().getPollTimeout();

        try (Consumer<String, Bytes> consumer = consumerFactory.createConsumer()) {
            consumer.subscribe(List.of(topicName));
            log.info("Starting to cache the contents of {}", topicName);
            ConsumerRecords<String, Bytes> records;
            do {
                records = consumer.poll(pollTimeout);
                records.forEach(messageProcessor::process);
            } while (!records.isEmpty());
        }
        log.info("Completed caching {}", topicName);
    }
}
For brevity's sake I'm using Lombok's @Slf4j and @RequiredArgsConstructor annotations, but those can be easily replaced. The ApplicationProperties class is just my way of getting the topic name I'm interested in. It can be replaced with something else, but my implementation uses Lombok's @Data annotation, and looks something like this:
@Data
@Configuration
@ConfigurationProperties(prefix = "app")
public class ApplicationProperties {

    private Kafka kafka = new Kafka();

    @Data
    public static class Kafka {
        private Map<String, KafkaConsumer> consumer = new HashMap<>();
    }

    @Data
    public static class KafkaConsumer {
        private String topic;
    }
}
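For completeness, a configuration shaped like that class would be read from something like this in application.yml (the topic value is only an example):

app:
  kafka:
    consumer:
      my-consumer:
        topic: "my-compacted-topic"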
I am experiencing problems using the @ConfigurationProperties feature.
Probably I am missing something, since the mechanism seems very simple, but for me it does not work.
I am using Spring Boot with the following main Application class:
@SpringBootApplication
@EnableAspectJAutoProxy
@EnableConfigurationProperties(QueuesProperties.class)
@PropertySource("file:config/queues.properties")
@ImportResource("classpath:/spring-config.xml")
public class Application {

    public static void main(String... args) {
        ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args);
    }
}
with QueuesProperties
@ConfigurationProperties(prefix = "wmq.in.queue")
public class QueuesProperties {

    private static final Logger LOGGER = LoggerFactory.getLogger(QueuesProperties.class);

    private String descr;

    public String getDescr() {
        return descr;
    }

    public void setDescr(String descr) {
        this.descr = descr;
    }
}
The properties file is very simple (I am trying to isolate the problem)
wmq.in.queue.descr = description
Then, I am trying to @Autowire the QueuesProperties into a @Component that I use in a spring-integration flow.
The QueuesProperties is correctly injected but the descr attribute is null.
@Autowired
private QueuesProperties queuesConfiguration;
while this
@Value("${wmq.in.queue.descr}")
private String descr;
is correctly evaluated.
I have made a lot of attempts with different configurations and code, but the result is the same: I get the QueuesProperties bean, but it is not populated.
What am I missing?
From the question it isn't very clear whether the wmq.in.queue.descr = description property is written in the application.properties file. I say that because you say the property is correctly evaluated with @Value but not with

@Autowired
private QueuesProperties queuesConfiguration;

Even the @PropertySource("file:config/queues.properties") makes me think that your wmq.in.queue.descr = description property is probably not written in application.properties but in file:config/queues.properties.
Summing up:
To use the @ConfigurationProperties feature, you have to write the properties in application.properties and use @EnableConfigurationProperties(QueuesProperties.class) on a @Component, @Configuration or similarly annotated class, like below.
@Component
@EnableConfigurationProperties(QueuesProperties.class)
public class YourBean {

    ....

    private final QueuesProperties queuesProperties;

    public YourBean(QueuesProperties queuesProperties){
        this.queuesProperties = queuesProperties;
    }

    .....
}
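So, following that, the property from the question would simply live in src/main/resources/application.properties:

wmq.in.queue.descr=description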
Actually, you can change the application.properties file name by customizing Spring Boot's property evaluation, but for a local app I discourage it. I consider application.properties a good name for the place where you put the configuration properties of your application.
I hope that this helps.
I have a project with Spring Boot 1.3.3 (among other stuff) and Redis configured to manage sessions, i.e., @EnableRedisHttpSession. The application works well and stores the information in Redis regularly.
The problem I'm facing is that, contrary to what the documentation says, whether or not I define a server.session.timeout, Redis always uses the default value of the annotation attribute (maxInactiveIntervalInSeconds), which is 1800.
Here is the documentation I followed: http://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-session.html
I've also tried the approach described by @rwinch here https://github.com/spring-projects/spring-session/issues/110, but also without success.
Update:
My configuration file as requested:
# First attempt (server.session.timeout), following the Spring documentation mentioned above
server:
  session:
    timeout: 10

spring:
  # Session timeout under spring (as mentioned by M Deinum in a comment - unfortunately doesn't work)
  session:
    timeout: 10
  redis:
    host: 192.168.99.101
    port: 6379
Besides that, I've also tried to implement a SessionListener in charge of setting the timeout (something like this):
public class SessionListener implements HttpSessionListener {

    @Value(value = "${server.session.timeout}")
    private int timeout;

    @Override
    public void sessionCreated(HttpSessionEvent event) {
        if (event != null && event.getSession() != null) {
            event.getSession().setMaxInactiveInterval(timeout);
        }
    }
...
It still didn't result in the correct behavior. I'm really racking my brain :|
Am I missing some point? Has anyone else faced this?
Thanks in advance.
Another solution:
@EnableRedisHttpSession
public class HttpSessionConfig {

    @Value("${server.session.timeout}")
    private Integer maxInactiveIntervalInMinutes;

    @Inject
    private RedisOperationsSessionRepository sessionRepository;

    @PostConstruct
    private void afterPropertiesSet() {
        sessionRepository.setDefaultMaxInactiveInterval(maxInactiveIntervalInMinutes * 60);
    }
This way you use the default configuration and just add your timeout. So you keep the default HttpSessionListener, and you don't need to use an ApplicationListener to set the timeout; it is set just once, during the application lifecycle.
Well, just in case someone is facing the same situation, we have two ways to work around it:
I. Implement the following:
@EnableRedisHttpSession
public class Application {

    // some other code here

    @Value("${spring.session.timeout}")
    private Integer maxInactiveIntervalInSeconds;

    @Bean
    public RedisOperationsSessionRepository sessionRepository(RedisConnectionFactory factory) {
        RedisOperationsSessionRepository sessionRepository = new RedisOperationsSessionRepository(factory);
        sessionRepository.setDefaultMaxInactiveInterval(maxInactiveIntervalInSeconds);
        return sessionRepository;
    }
Unfortunately, I had to implement a listener in order to perform additional actions when a session expires, and when you define a RedisOperationsSessionRepository you no longer have an HttpSessionListener (instead you have a SessionMessageListener, as described here: http://docs.spring.io/spring-session/docs/current/reference/html5/#api-redisoperationssessionrepository). Because of this, the second approach was required.
II. To overcome the problem:
@EnableRedisHttpSession
public class Application implements ApplicationListener {

    @Value("${spring.session.timeout}")
    private Integer maxInactiveIntervalInSeconds;

    @Autowired
    private RedisOperationsSessionRepository redisOperation;

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof ContextRefreshedEvent) {
            redisOperation.setDefaultMaxInactiveInterval(maxInactiveIntervalInSeconds);
        }
    }
...
Assuming that neither of them is the desirable out-of-the-box setup, at least they allow me to continue with my PoC.
@EnableRedisHttpSession(maxInactiveIntervalInSeconds = 60)
You can remove the @EnableRedisHttpSession annotation and instead set the property:
spring.session.store-type=redis
Both spring.session.timeout and server.servlet.session.timeout will work. Please note that spring.session.timeout overrides server.servlet.session.timeout, per my test.
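For example, a minimal application.properties for this approach (the values are only illustrative):

spring.session.store-type=redis
# spring.session.timeout takes precedence over server.servlet.session.timeout if both are set
spring.session.timeout=30m
server.servlet.session.timeout=30m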
Extend RedisHttpSessionConfiguration and do the initialization in a @PostConstruct method.
@Configuration
public class HttpSessionConfig extends RedisHttpSessionConfiguration {

    @Value("${spring.session.timeout}")
    private Integer sessionTimeoutInSec;

    @Value("${spring.session.redis.namespace}")
    private String sessionRedisNamespace;

    @Bean
    public LettuceConnectionFactory connectionFactory() {
        return new LettuceConnectionFactory();
    }

    @PostConstruct
    public void initConfig() throws Exception {
        this.setMaxInactiveIntervalInSeconds(sessionTimeoutInSec);
        this.setRedisNamespace(sessionRedisNamespace);
    }
}