Spring Boot Kafka ABSwitchCluster - spring-boot

I couldn't find any example of how to switch between Kafka clusters.
Has anyone implemented the ABSwitchCluster class from Spring Kafka?
https://docs.spring.io/spring-kafka/reference/html/
I tried the code below, but it's not switching clusters.
@RestController
public class ApacheKafkaWebController {

    @Autowired
    ConsumerKakfaConfiguration configuration;

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Autowired
    private ABSwitchCluster switcher;

    @GetMapping(value = "/switch")
    public String producer() {
        registry.stop();
        switcher.secondary();
        registry.start();
        return "switched!";
    }
}
and the switcher bean here:
@Bean
public ABSwitchCluster switcher() {
    return new ABSwitchCluster("127.0.0.1:9095", "127.0.0.1:9096");
}
Could you please tell me if I am missing anything here? It is still running against port 9095.

See this answer and this test.
Basically, you switch the cluster and reset the connections by stopping and starting listener containers and resetting the producer factory.
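For reference, a minimal sketch of that approach (the controller shape mirrors the question; the producer-factory wiring is an assumption based on the linked answer, not the poster's actual configuration). The key points are that the ABSwitchCluster bean must also be registered as the bootstrap-servers supplier on both the consumer and producer factories (e.g. consumerFactory.setBootstrapServersSupplier(switcher)), and the producer factory must be reset after switching so cached producers reconnect:

@RestController
public class ClusterSwitchController {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Autowired
    private ABSwitchCluster switcher;

    @Autowired
    private DefaultKafkaProducerFactory<String, String> producerFactory;

    @GetMapping("/switch")
    public String switchCluster() {
        registry.stop();          // stop all listener containers
        switcher.secondary();     // switch bootstrap.servers to the secondary cluster
        producerFactory.reset();  // close cached producers so new ones pick up the new servers
        registry.start();         // restart containers; consumers reconnect to the new cluster
        return "switched!";
    }
}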

Related

KafkaListener Not triggered in Spring Boot test

I have a Spring Boot test to check that a Kafka consumer listens for a message on a specific topic. The Kafka listener is triggered when using @SpringBootTest, but I don't want to load all the classes, so I supplied only the listener class, like this: @SpringBootTest(classes = {KafkaConsumerTest.class}).
When only the consumer class is loaded, the listener no longer triggers. Is there something I am missing?
Here is the KafkaTestConsumer class
@Service
public class KafkaTestConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaTestConsumer.class);

    private CountDownLatch latch = new CountDownLatch(1);
    private String payload;

    @KafkaListener(topics = {"topic"})
    public void receive(ConsumerRecord<?, ?> consumerRecord) {
        payload = consumerRecord.toString();
        latch.countDown();
    }

    public CountDownLatch getLatch() {
        return latch;
    }

    public void resetLatch() {
        latch = new CountDownLatch(1);
    }

    public String getPayload() {
        return payload;
    }
}
It would be great to see what your KafkaConsumerTest looks like, but perhaps you are just overriding the whole auto-configuration with a plain @Configuration.
See more in the docs: https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.testing.spring-boot-applications.detecting-configuration
If you want to customize the primary configuration, you can use a nested @TestConfiguration class. Unlike a nested @Configuration class, which would be used instead of your application's primary configuration, a nested @TestConfiguration class is used in addition to your application's primary configuration.
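A minimal sketch of what such a test could look like (assuming spring-kafka-test's @EmbeddedKafka is on the classpath; the test class and topic are illustrative, not taken from the question). The full @SpringBootTest, without a classes attribute, keeps Kafka auto-configuration and @EnableKafka processing active, which is what actually registers the listener container:

@SpringBootTest(properties = "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}")
@EmbeddedKafka(partitions = 1, topics = "topic")
class KafkaConsumerTest {

    @Autowired
    private KafkaTestConsumer consumer;

    @Autowired
    private KafkaTemplate<String, String> template;

    @Test
    void listenerIsTriggered() throws Exception {
        template.send("topic", "hello");
        // the latch is counted down by the @KafkaListener method
        assertTrue(consumer.getLatch().await(10, TimeUnit.SECONDS));
    }

    // Used in addition to the primary configuration, not instead of it
    @TestConfiguration
    static class ExtraTestBeans {
    }
}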

Spring-Kafka: How to pass the kafka topic from the application.yml

I have a small project in Spring Kafka.
I would like to pass my Kafka topic from application.yml and avoid hard-coding it. For the moment I have this situation:
public class KafkaConsumer {

    @Autowired
    private UserRepository userRepository;

    @KafkaListener(topics = "myTopic")
    public void listen(@Valid UserDto userDto) {
        User user = new User(userDto);
        userRepository.save(userDto.getAge(), user);
    }
}
At the moment the Kafka topic is a static string. Is it possible to put it in application.yml and have it read from there? Thanks everyone for any help.
You can put your topic in application.yml:

spring:
  kafka:
    template:
      default-topic: "MyTopic"

In your KafkaListener:

@KafkaListener(topics = "#{'${spring.kafka.template.default-topic}'}")

This solves the problem of the annotation attribute failing to take a dynamic value.
This worked for me.
You can use the below entry in the application.yml file.
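Presumably something like this, matching the ${kafka.topic.name} key used in the code below (the topic value itself is illustrative):

kafka:
  topic:
    name: "MyTopic"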
Usually we use @Value as below to pick data from properties/yaml files for a specified key in your Java class:

@Value("${kafka.topic.name}")
private String TOPIC_NAME;

Since the @KafkaListener annotation expects a constant attribute, you can use the property placeholder directly, as below:
public class KafkaConsumer {

    @Autowired
    private UserRepository userRepository;

    @KafkaListener(topics = "${kafka.topic.name}")
    public void listen(@Valid UserDto userDto) {
        User user = new User(userDto);
        userRepository.save(userDto.getAge(), user);
    }
}

How to load a compacted topic in memory before starting the context

I'm using a compacted topic in Kafka, which I load into a HashMap at application startup.
Then I listen to a normal topic for messages and process them using the HashMap built from the compacted topic.
How can I make sure the compacted topic is fully read and the HashMap fully initialized before starting to listen to the other topics?
(Same for RestControllers.)
Implement SmartLifecycle and load the map in start(). Make sure the phase is earlier than any other object that needs the map.
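A minimal sketch of that approach (the loader class, cache type, and phase value are illustrative, not from the answer):

@Component
public class CompactedTopicLoader implements SmartLifecycle {

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private volatile boolean running;

    @Override
    public void start() {
        // consume the compacted topic to the end here and populate the cache
        running = true;
    }

    @Override
    public void stop() {
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public int getPhase() {
        // any value lower than the listener containers' phase makes this start first
        return 0;
    }
}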
This is an old question, I know, but I wanted to provide a more complete code sample of the solution I ended up with when I struggled with this very problem myself.
The idea is that, as Gary mentioned in the comments of his own answer, a listener isn't the correct thing to use during initialization - that comes afterwards. An alternative to Gary's SmartLifecycle idea, however, is InitializingBean, which I find less complicated to implement, since it's only one method: afterPropertiesSet():
@Slf4j
@Configuration
@RequiredArgsConstructor
public class MyCacheInitializer implements InitializingBean {

    private final ApplicationProperties applicationProperties; // a custom ConfigurationProperties class
    private final KafkaProperties kafkaProperties;
    private final ConsumerFactory<String, Bytes> consumerFactory;
    private final MyKafkaMessageProcessor messageProcessor;

    @Override
    public void afterPropertiesSet() {
        String topicName = applicationProperties.getKafka().getConsumer().get("my-consumer").getTopic();
        Duration pollTimeout = kafkaProperties.getListener().getPollTimeout();

        try (Consumer<String, Bytes> consumer = consumerFactory.createConsumer()) {
            consumer.subscribe(List.of(topicName));
            log.info("Starting to cache the contents of {}", topicName);
            ConsumerRecords<String, Bytes> records;
            do {
                records = consumer.poll(pollTimeout);
                records.forEach(messageProcessor::process);
            } while (!records.isEmpty());
        }
        log.info("Completed caching {}", topicName);
    }
}
For brevity's sake I'm using Lombok's @Slf4j and @RequiredArgsConstructor annotations, but those can be easily replaced. The ApplicationProperties class is just my way of getting the topic name I'm interested in. It can be replaced with something else, but my implementation uses Lombok's @Data annotation, and looks something like this:
@Data
@Configuration
@ConfigurationProperties(prefix = "app")
public class ApplicationProperties {

    private Kafka kafka = new Kafka();

    @Data
    public static class Kafka {
        private Map<String, KafkaConsumer> consumer = new HashMap<>();
    }

    @Data
    public static class KafkaConsumer {
        private String topic;
    }
}
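With that class, the topic name read in afterPropertiesSet() would be bound from configuration such as the following (the topic value is an assumed example; the keys follow the "app" prefix and the "my-consumer" map key used above):

app:
  kafka:
    consumer:
      my-consumer:
        topic: "my-compacted-topic"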

Dropwizard Counter Not Retaining value in Spring Boot App

I am trying to count the number of Spring Boot apps I have in my private cloud environment. The logic is to increment a Counter metric during startup and decrement it during shutdown. All the different deployments publish to the same metric prefix (an assumption). The following is the graph I get in Graphite:

# application.properties
spring.metrics.export.delay-millis=100

Why does the value come down to 0 even while the app is running? I have tried two different implementations with the same result. Can someone please point out the gap in my understanding? The code is below.
@Component
public class AppStartupBean implements CommandLineRunner {

    private static final String appMetricName = "MyApp.currentCount.GraphOne";
    private static final String metricName = "MyApp.currentCount.GraphTwo";

    @Autowired
    DropwizardMetricServices dwMetricService;

    @Autowired
    private MetricRegistry registry;

    @Override
    public void run(String... arg0) throws Exception {
        dwMetricService.increment(appMetricName);
        Counter counter = registry.counter(metricName);
        counter.inc();
    }
}
The configuration for DropwizardMetricServices was wrong. I was using:

@Bean
public DropwizardMetricServices dwMetricService(MetricRegistry registry) {
    return new DropwizardMetricServices(registry);
}

Instead, we should just @Autowired DropwizardMetricServices as needed. From the Spring Boot docs:

When Dropwizard metrics are in use, the default CounterService and GaugeService are replaced with a DropwizardMetricServices, which is a wrapper around the MetricRegistry (so you can @Autowired one of those services and use it as normal).
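A sketch of the corrected wiring (assuming Spring Boot's auto-configured DropwizardMetricServices, with no custom @Bean definition of it anywhere):

@Component
public class AppStartupBean implements CommandLineRunner {

    private static final String appMetricName = "MyApp.currentCount.GraphOne";

    // Autowire Boot's auto-configured wrapper instead of defining a custom bean
    @Autowired
    private DropwizardMetricServices dwMetricService;

    @Override
    public void run(String... args) {
        dwMetricService.increment(appMetricName);
    }
}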

Spring Batch Tomcat memory leak

I use
Tomcat 8.0.26
Spring Boot 1.2.6.RELEASE
Spring 4.2.1.RELEASE
Spring Batch 3.0.5.RELEASE
In my application I have the following Spring Batch config:
@Configuration
@EnableBatchProcessing
public class ReportJobConfig {

    public static final String REPORT_JOB_NAME = "reportJob";

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private ReportService reportService;

    @Bean(name = REPORT_JOB_NAME)
    public Job reportJob() {
        //@formatter:off
        return jobBuilderFactory
                .get(REPORT_JOB_NAME)
                .flow(createRequestStep())
                .on("*").to(retriveInfoStep())
                .on("*").to(notifyAdminStep())
                .end().build();
        //@formatter:on
    }

    @Bean
    public Step createRequestStep() {
        return stepBuilderFactory.get("createRequest").tasklet(new CreateRequestTasklet(reportService)).build();
    }

    @Bean
    public Step retriveInfoStep() {
        return stepBuilderFactory.get("retriveInfo").tasklet(new RetriveInfoTasklet(reportService)).build();
    }

    @Bean
    public Step notifyAdminStep() {
        return stepBuilderFactory.get("notifyAdmin").tasklet(new NotifyAdminTasklet()).build();
    }
}
This is how I run the job:
@Service
public class ReportJobServiceImpl implements ReportJobService {

    final static Logger logger = LoggerFactory.getLogger(ReportJobServiceImpl.class);

    @Autowired
    @Qualifier(ReportJobConfig.REPORT_JOB_NAME)
    private Job reportJob;

    @Autowired
    private JobLauncher jobLauncher;

    @Override
    public void runReportJob(String messageContent) throws JobExecutionAlreadyRunningException, JobRestartException,
            JobInstanceAlreadyCompleteException, JobParametersInvalidException {
        Map<String, JobParameter> parameters = new HashMap<>();
        JobParameter reportIdParameter = new JobParameter(messageContent);
        parameters.put(REPORT_ID, reportIdParameter);
        jobLauncher.run(reportJob, new JobParameters(parameters));
    }
}
Batch properties:
batch.jdbc.driver=com.mysql.jdbc.Driver
batch.jdbc.url=jdbc:mysql://localhost/database
batch.jdbc.user=user
batch.jdbc.password=password
batch.jdbc.testWhileIdle=true
batch.jdbc.validationQuery=SELECT 1
batch.drop.script=classpath:/org/springframework/batch/core/schema-drop-mysql.sql
batch.schema.script=classpath:/org/springframework/batch/core/schema-mysql.sql
batch.business.schema.script=classpath:/business-schema-mysql.sql
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.MySQLMaxValueIncrementer
batch.database.incrementer.parent=columnIncrementerParent
batch.lob.handler.class=org.springframework.jdbc.support.lob.DefaultLobHandler
batch.grid.size=50
batch.jdbc.pool.size=6
batch.verify.cursor.position=true
batch.isolationlevel=ISOLATION_SERIALIZABLE
batch.table.prefix=BATCH_
I deploy this application to Tomcat 8, perform some jobs, and then undeploy the application via the Tomcat Web Application Manager.
With the Java VisualVM tool I compared memory snapshots before and after, and I can see that a lot of Spring Batch (org.springframework.batch.*) related objects still exist in memory:
Also, I ran 1000 reportJob executions and got huge memory consumption on my machine. I have no idea what could be wrong right now.
What could be causing this issue?
UPDATED
I have consumed ~1000 messages from an AWS SQS queue. My JMS listener is configured to consume one message at a time. During the execution I had the following heap histogram:
I really don't understand why I need, for example, 7932 instances of StepExecution or 5285 JobExecution objects in memory. Where is my mistake?
