spring-cloud-stream - Kafka producer prefix unique per node - spring-boot

I want to send something to a Kafka topic in a producer-only transaction (not a read-process-write flow), using an output channel.
I read the documentation and another question on StackOverflow (Spring cloud stream kafka transactions in producer side).
The problem is that I need to set a unique transactionIdPrefix per node.
Any suggestion how to do it?

Here is one way...
@Component
class TxIdCustomizer implements EnvironmentAware {

    @Override
    public void setEnvironment(Environment environment) {
        Properties properties = new Properties();
        properties.setProperty(
                "spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix",
                UUID.randomUUID().toString());
        ((StandardEnvironment) environment).getPropertySources()
                .addLast(new PropertiesPropertySource("txId", properties));
    }
}
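Alternatively, Spring Boot's built-in RandomValuePropertySource can generate the UUID directly in configuration; a minimal sketch for application.properties (the tx- literal is illustrative):

spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-${random.uuid}-

Be aware that random.* properties yield a new value on every lookup, so this only works if the binder reads the property exactly once; the EnvironmentAware customizer above does not have that caveat.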

Related

How many Kafka topics to create for an API?

I'm using Kafka for my API, with Spring and microservices. I'll post my Kafka code below:
Command:
private static final Logger logger =
        LoggerFactory.getLogger(UserCommandServiceImpl.class);

@Autowired
private KafkaTemplate<String, Object> kafkaTemplate;

public void sendMessage(User objeto) {
    logger.info(String.format("Message sent -> %s", objeto.toString()));
    this.kafkaTemplate.send("quickstart-events", objeto);
}
Query:
private final Logger logger = LoggerFactory.getLogger(UserQueryServiceImpl.class);

@Autowired
private MongoTemplate mongoTemplate;

@KafkaListener(topics = "quickstart-events", groupId = "group-id")
public void consume(String message) {
    logger.info(String.format("Message received -> %s", message));
    mongoTemplate.insert(message, "user");
}
I installed Kafka from that site.
I'm using the CQRS pattern, so each query is one microservice and each command another.
My question is simple: do I create a Kafka topic for each microservice?
Thanks!
Imagine a Kafka topic as a database table: use one topic per kind of data.
If you are wondering how to scale your application, ask instead how many partitions your topic should have. A topic is a set of partitions that together handle all of its data.
A topic will receive values from more than one producer and will hold just one kind of message. A message can be stored in any partition, and which one is chosen by the message key.
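If you manage topics from the application, Spring Kafka can declare them (with their partition counts) at startup through NewTopic beans; a minimal sketch, assuming spring-kafka 2.3+ and Spring Boot's auto-configured KafkaAdmin (topic name and counts are illustrative):

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfig {

    // one topic per kind of data; the partition count bounds consumer parallelism
    @Bean
    public NewTopic userEvents() {
        return TopicBuilder.name("user-events")
                .partitions(3)
                .replicas(1)
                .build();
    }
}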

Is selectively disabling queue consumption with @JmsListener in Spring Boot possible?

I'm using Spring Boot along with @JmsListener to retrieve IBM MQ messages from multiple queues within the same queue manager. So far I can get messages without any issues, but there could be scenarios where I have to stop consuming messages from one of these queues temporarily. It doesn't have to be dynamic.
I'm not using any custom ConnectionFactory methods. When needed, I would like to make config changes in application.properties to disable consumption from that particular queue and restart the process. Is this possible? I can't find any specific info for this scenario. Would appreciate any suggestions. TIA.
@Component
public class MyJmsListener {

    @JmsListener(destination = "${ibm.mq.queue.queue01}")
    public void handleQueue01(String message) {
        System.out.println("received: " + message);
    }

    @JmsListener(destination = "${ibm.mq.queue.queue02}")
    public void handleQueue02(String message) {
        System.out.println("received: " + message);
    }
}
application.properties
ibm.mq.queue.queue01=IBM.QUEUE01
ibm.mq.queue.queue02=IBM.QUEUE02
If you give each @JmsListener an id attribute, you can start and stop them individually using the JmsListenerEndpointRegistry bean, as shown in the sketch below.

registry.getListenerContainer(id).stop();
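A minimal sketch of the whole approach, assuming the listener is declared with an id (the id value, the flag property app.queue01.enabled, and the ListenerToggle class are illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.JmsListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class MyJmsListener {

    @JmsListener(id = "queue01Listener", destination = "${ibm.mq.queue.queue01}")
    public void handleQueue01(String message) {
        System.out.println("received: " + message);
    }
}

@Component
class ListenerToggle {

    @Autowired
    private JmsListenerEndpointRegistry registry;

    // set app.queue01.enabled=false in application.properties to skip this queue
    @Value("${app.queue01.enabled:true}")
    private boolean queue01Enabled;

    @EventListener(ApplicationReadyEvent.class)
    public void applyToggles() {
        // containers are registered and started by the time the app is ready
        if (!queue01Enabled) {
            registry.getListenerContainer("queue01Listener").stop();
        }
    }
}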

Consumer restart when I reset Spring Boot app

I have a Kafka topic with data, called "topic01".
I want to create a consumer that, every time I start my Spring Boot 2 application, starts reading that topic from the beginning.
With the following code, anything newly added to the topic does reach me, but on first start the consumer does not read the topic from the beginning.
@KafkaListener(topics = "topic01")
public void listenTopic01(ConsumerRecord<String, MiDTO> consumerRecord) throws Exception {
    logger.info("KafkaHandler");
    logger.info(consumerRecord.value().toString());
    logger.info(consumerRecord.key().toString());
    latch.countDown();
}
application.properties:
spring.kafka.consumer.group-id=XXXXX
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
What configuration should I add so that this @KafkaListener reads the topic from the beginning every time I restart my application?
Either use a unique (random) group-id each time, or have your listener class implement ConsumerSeekAware and add

@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
    assignments.keySet().forEach(tp -> callback.seekToBeginning(tp.topic(), tp.partition()));
}

or

@KafkaListener(topics = "topic01",
        groupId = "#{T(java.util.UUID).randomUUID().toString()}")
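Putting the ConsumerSeekAware option together, a minimal sketch of the listener class (the topic name and MiDTO are from the question):

import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.ConsumerSeekAware;
import org.springframework.stereotype.Component;

@Component
public class Topic01Listener implements ConsumerSeekAware {

    @KafkaListener(topics = "topic01")
    public void listenTopic01(ConsumerRecord<String, MiDTO> consumerRecord) {
        // handle the record
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        // rewind every assigned partition to the beginning on each (re)start
        assignments.keySet().forEach(tp -> callback.seekToBeginning(tp.topic(), tp.partition()));
    }
}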

Kafka consumer picking up topics dynamically

I have a Kafka consumer configured in Spring Boot. Here's the config class:
@EnableKafka
@Configuration
@PropertySource({"classpath:kafka.properties"})
public class KafkaConsumerConfig {

    @Autowired
    private Environment env;

    @Bean
    public ConsumerFactory<String, GenericData.Record> consumerFactory() {
        Map<String, Object> dataRiverProps = new HashMap<>();
        dataRiverProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, env.getProperty("bootstrap.servers"));
        dataRiverProps.put(ConsumerConfig.GROUP_ID_CONFIG, env.getProperty("group.id"));
        dataRiverProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, env.getProperty("enable.auto.commit"));
        dataRiverProps.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, env.getProperty("auto.commit.interval.ms"));
        dataRiverProps.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, env.getProperty("session.timeout.ms"));
        dataRiverProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, env.getProperty("auto.offset.reset"));
        dataRiverProps.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, env.getProperty("schema.registry.url"));
        dataRiverProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
        dataRiverProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
        return new DefaultKafkaConsumerFactory<>(dataRiverProps);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, GenericData.Record> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, GenericData.Record> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
And here's the consumer:
@Component
public class KafkaConsumer {

    @Autowired
    private MessageProcessor messageProcessor;

    @KafkaListener(topics = "#{'${kafka.topics}'.split(',')}", containerFactory = "kafkaListenerContainerFactory")
    public void consumeAvro(GenericData.Record message) {
        messageProcessor.process();
    }
}
Please note that I am using topics = "#{'${kafka.topics}'.split(',')}" to pick up the topics from a properties file.
And this is what my kafka.properties file looks like:
kafka.topics=pwdChange,pwdCreation
bootstrap.servers=aaa.bbb.com:37900
group.id=pwdManagement
enable.auto.commit=true
auto.commit.interval.ms=1000
session.timeout.ms=30000
schema.registry.url=http://aaa.bbb.com:37800
Now if I am to add a new topic to the subscription, say pwdExpire, and modify the prop files as follows:
kafka.topics=pwdChange,pwdCreation,pwdExpire
Is there a way for my consumer to start subscribing to this new topic without restarting the server?
I have found this post Spring Kafka - Subscribe new topics during runtime, but the documentation has this to say about metadata.max.age.ms:
The period of time in milliseconds after which we force a refresh of
metadata even if we haven't seen any partition leadership changes to
proactively discover any new brokers or partitions.
It sounds to me like that won't work. Thanks for your help!
No; the only way to do that is to use a topic pattern; as new topics are added (that match the pattern), the broker will add them to the subscription, after 5 minutes, by default.
You can, however, add new listener container(s) for the new topic(s) at runtime.
Another option would be to load the #KafkaListener bean in a child application context and re-create the context each time the topic(s) change.
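For the second option, a minimal sketch of creating a container at runtime from the existing factory (the topic name and handler body are illustrative; assumes the kafkaListenerContainerFactory bean and messageProcessor from the question are injected):

ConcurrentMessageListenerContainer<String, GenericData.Record> container =
        kafkaListenerContainerFactory.createContainer("pwdExpire");
// the group.id comes from the consumer factory configuration
container.getContainerProperties().setMessageListener(
        (MessageListener<String, GenericData.Record>) record -> messageProcessor.process());
container.start();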
EDIT
See the javadocs for KafkaConsumer.subscribe(Pattern pattern)...
    /**
     * Subscribe to all topics matching specified pattern to get dynamically assigned partitions.
     * The pattern matching will be done periodically against topics existing at the time of check.
     * <p>
    ...
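A minimal sketch of the pattern approach against the question's setup (the pattern is illustrative, chosen to match the pwd* topics); lowering metadata.max.age.ms in the consumer config shortens the default five-minute discovery delay:

// replaces the fixed topic list on the listener
@KafkaListener(topicPattern = "pwd.*", containerFactory = "kafkaListenerContainerFactory")
public void consumeAvro(GenericData.Record message) {
    messageProcessor.process();
}

// in consumerFactory(): check for newly matching topics every 30s instead of every 5 min
dataRiverProps.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, "30000");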

How to set a Message Handler programmatically in Spring Cloud AWS SQS?

Maybe someone has an idea for my following problem:
I am currently on a project where I want to use AWS SQS with the Spring Cloud integration. For the receiving part I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        // ... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the documentation
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" a piece of functionality to an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss how to dynamically "connect" my provided MessageHandler (from my API) to the incoming message for the specified queue name.
In the example's config there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ, and thought it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I am facing the same issue.
I am going an unusual way: I set up an AWS client bean at build time, and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation, with which I can programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
I wrote an example that iterates over the queues defined in properties and consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void poll() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);

                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());

                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify: in my case I need multiple queues, one per tenant, with each queue URL passed in a property file. In your case you could of course get the queue names from another source, maybe a ThreadLocal holding the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS JMS documentation).
When we do Spring and SQS, we use spring-cloud-starter-aws-messaging.
Then just create a listener class:
@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        // process the message
    }
}
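The queue name does not have to be hard-coded: @SqsListener resolves property placeholders, so it can at least be externalized to configuration (the property key is illustrative):

@SqsListener("${app.sqs.queue-name}")
public void listen(MyMessageType message) {
    // process the message
}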
