Adding custom header using Spring Kafka

I am planning to use the Spring Kafka client to consume and produce messages from a Kafka setup in a Spring Boot application. I see support for custom headers in Kafka 0.11 as detailed here. While it is available for native Kafka producers and consumers, I don't see support for adding/reading custom headers in Spring Kafka.
I am trying to implement a DLQ for messages based on a retry count that I was hoping to store in the message header without having to parse the payload.

I was looking for an answer when I stumbled upon this question. However, I'm using the ProducerRecord<?, ?> class instead of Message<?>, so the header mapper does not seem to be relevant.
Here is my approach to add a custom header:
var record = new ProducerRecord<String, String>(topicName, "Hello World");
record.headers().add("foo", "bar".getBytes());
kafkaTemplate.send(record);
Now to read the headers (before consuming), I've added a custom interceptor.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.header.Header;

@Slf4j
public class MyConsumerInterceptor implements ConsumerInterceptor<Object, Object> {

    @Override
    public ConsumerRecords<Object, Object> onConsume(ConsumerRecords<Object, Object> records) {
        Set<TopicPartition> partitions = records.partitions();
        partitions.forEach(partition -> interceptRecordsFromPartition(records.records(partition)));
        return records;
    }

    private void interceptRecordsFromPartition(List<ConsumerRecord<Object, Object>> records) {
        records.forEach(record -> {
            var myHeaders = new ArrayList<Header>();
            record.headers().headers("MyHeader").forEach(myHeaders::add);
            log.info("My Headers: {}", myHeaders);
            // Do with the header as you see fit
        });
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {}

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}
The final bit is to register this interceptor with the Kafka Consumer Container with the following (Spring Boot) configuration:
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class MessagingConfiguration {

    @Bean
    public ConsumerFactory<?, ?> kafkaConsumerFactory(KafkaProperties properties) {
        Map<String, Object> consumerProperties = properties.buildConsumerProperties();
        consumerProperties.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, MyConsumerInterceptor.class.getName());
        return new DefaultKafkaConsumerFactory<>(consumerProperties);
    }
}
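As an alternative to the interceptor, a listener method can read the header directly (a hedged sketch, reusing the "MyHeader" key from above; the ${topic.name} property is a placeholder):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.messaging.handler.annotation.Header;

// Spring resolves @Header parameters from the record's headers, so no
// interceptor is needed just to read a value.
@KafkaListener(topics = "${topic.name}")
public void listen(String payload,
                   @Header(name = "MyHeader", required = false) byte[] myHeader) {
    log.info("Payload: {}, MyHeader: {}",
            payload, myHeader == null ? null : new String(myHeader));
}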

Well, Spring Kafka provides headers support since version 2.0: https://docs.spring.io/spring-kafka/docs/2.1.2.RELEASE/reference/html/_reference.html#headers
You can obtain a KafkaHeaderMapper instance and use it to populate headers on the Message before sending it via KafkaTemplate.send(Message<?> message). Or you can use the plain KafkaTemplate.send(ProducerRecord<K, V> record).
When you receive records using a KafkaMessageListenerContainer, the KafkaHeaderMapper can be supplied via a MessagingMessageConverter injected into the RecordMessagingMessageListenerAdapter.
So, any custom headers can be transferred either way.
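For illustration, sending via the spring-messaging abstraction might look roughly like this (a minimal sketch; the topic name and the retryCount header are placeholders, not from the question):

import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

// KafkaTemplate.send(Message<?>) runs the configured header mapper, so
// headers set here are mapped onto the native Kafka record headers.
Message<String> message = MessageBuilder
        .withPayload("Hello World")
        .setHeader(KafkaHeaders.TOPIC, "my-topic")  // placeholder topic
        .setHeader("retryCount", 0)                 // hypothetical custom header
        .build();
kafkaTemplate.send(message);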

Related

Read messages from different AWS account using @SqsListener

I have an SQS standard queue that is provided by a third-party vendor who has given our IAM user access to read messages from it. So the AWS account ID for the queue is different from my user's.
I'm trying to use Spring's @SqsListener annotation to consume these messages, but I'm having trouble specifying the account ID that should be consumed from.
My bean configuration for the client looks like this:
@Bean
fun amazonSQSAsyncClient(): AmazonSQSAsync = AmazonSQSAsyncClientBuilder.standard()
    .withCredentials(AWSStaticCredentialsProvider(BasicAWSCredentials(awsProperties.accessKey, awsProperties.secretKey)))
    .withEndpointConfiguration(AwsClientBuilder.EndpointConfiguration(awsProperties.url, awsProperties.region))
    .build()
I see no way of specifying the account ID in the credentials, and I also could not find any properties that can be used to define an account ID.
I tried setting the awsProperties.url shown above to something like https://sqs.us-east-1.amazonaws.com/<accountId>, but this does not seem to work. It is still looking for the queue in my own account and throwing a queue-not-found error.
Any ideas how to fix this and force the Spring AWS bean to consume from a specific AWS account?
You have a user that can access the queue in another account. That means you can run code with that user from your own account and access the queue in the other account.
Initializing an SQS client always uses the account it is running under, so you don't have to adjust the client itself:
@Bean
fun amazonSQSAsyncClient(): AmazonSQSAsync = AmazonSQSAsyncClientBuilder.standard()
    .withCredentials(AWSStaticCredentialsProvider(BasicAWSCredentials(awsProperties.accessKey, awsProperties.secretKey)))
    .build()
You need to make sure the code can access the queue.
In the code you should set your queue URL like this:
https://sqs.<region>.amazonaws.com/<account>/<queuename>
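For example (a sketch; the region, account ID, and queue name below are placeholders):

import org.springframework.cloud.aws.messaging.listener.annotation.SqsListener;

public class ThirdPartyQueueListener {

    // Passing the full queue URL (which carries the owning account id)
    // instead of a bare queue name, so resolution does not default to
    // the caller's own account.
    @SqsListener("https://sqs.us-east-1.amazonaws.com/123456789012/third-party-queue")
    public void listen(String message) {
        System.out.println("Received: " + message);
    }
}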
I quickly tried to access a queue from another account. If the permissions on the queue are set correctly, you have two possibilities. The first is using the queue URL instead of the name (I checked, it works). The second is creating your own DestinationResolver and providing it to the SimpleMessageListenerContainer. I created a small app with Spring Boot and it worked well. I've pasted the code below.
In a future release I'll figure out a better way to support this use case.
package demo;

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.model.GetQueueUrlRequest;
import com.amazonaws.services.sqs.model.GetQueueUrlResult;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.aws.core.env.ResourceIdResolver;
import org.springframework.cloud.aws.messaging.config.SimpleMessageListenerContainerFactory;
import org.springframework.cloud.aws.messaging.support.destination.DynamicQueueUrlDestinationResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.messaging.core.DestinationResolutionException;
import org.springframework.messaging.core.DestinationResolver;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.util.Assert;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @Bean
    public MessageListener messageListener() {
        return new MessageListener();
    }

    @Bean
    public SimpleMessageListenerContainerFactory simpleMessageListenerFactory(AmazonSQS amazonSqs, ResourceIdResolver resourceIdResolver) {
        SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
        factory.setDestinationResolver(new DynamicAccountAwareQueueUrlDestinationResolver(amazonSqs, resourceIdResolver));
        return factory;
    }

    public static class DynamicAccountAwareQueueUrlDestinationResolver implements DestinationResolver<String> {

        public static final String ACCOUNT_QUEUE_SEPARATOR = ":";

        private final AmazonSQS amazonSqs;
        private final DynamicQueueUrlDestinationResolver dynamicQueueUrlDestinationResolverDelegate;

        public DynamicAccountAwareQueueUrlDestinationResolver(AmazonSQS amazonSqs, ResourceIdResolver resourceIdResolver) {
            Assert.notNull(amazonSqs, "amazonSqs must not be null");
            this.amazonSqs = amazonSqs;
            this.dynamicQueueUrlDestinationResolverDelegate = new DynamicQueueUrlDestinationResolver(amazonSqs, resourceIdResolver);
        }

        @Override
        public String resolveDestination(String queue) throws DestinationResolutionException {
            if (queue.contains(ACCOUNT_QUEUE_SEPARATOR)) {
                String account = queue.substring(0, queue.indexOf(ACCOUNT_QUEUE_SEPARATOR));
                String queueName = queue.substring(queue.indexOf(ACCOUNT_QUEUE_SEPARATOR) + 1);
                GetQueueUrlResult queueUrlResult = this.amazonSqs.getQueueUrl(new GetQueueUrlRequest()
                        .withQueueName(queueName)
                        .withQueueOwnerAWSAccountId(account));
                return queueUrlResult.getQueueUrl();
            } else {
                return this.dynamicQueueUrlDestinationResolverDelegate.resolveDestination(queue);
            }
        }
    }

    public static class MessageListener {

        private static final Logger LOG = LoggerFactory.getLogger(MessageListener.class);

        @MessageMapping("633332177961:queue-name")
        public void listen(String message) {
            LOG.info("Received message: {}", message);
        }
    }
}

Need to send JSON to JMS using Apache Camel Spring Boot

I am using Spring Boot with Apache Camel and I am able to send messages from one queue to another queue.
Below is the code:
import com.google.gson.Gson;
import org.apache.camel.Exchange;
import org.apache.camel.LoggingLevel;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Component
public class JmsRoute extends RouteBuilder {

    static final Logger log = LoggerFactory.getLogger(JmsRoute.class);

    @Override
    public void configure() throws Exception {
        from("{{inbound.endpoint}}")
            .transacted()
            .log(LoggingLevel.INFO, log, "Received Message")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    Student student = new Student();
                    Gson gson = new Gson();
                    String json = gson.toJson(student);
                    log.info("Exchange: {}", exchange.getMessage().getBody());
                    log.info("**********: {}", exchange.getMessage());
                }
            })
            .loop()
            .simple("{{outbound.loop.count}}")
            .to("{{outbound.endpoint}}")
            .log(LoggingLevel.INFO, log, "Message Sent")
            .end();
    }
}
I need to convert an object to JSON (which I can do using Gson) and then send it over the queue.
I am new to Camel and tried to find a solution for this online, but couldn't get any help.
Can anyone please help here?
You are not setting the JSON on the exchange body.
public void process(Exchange exchange) throws Exception {
    Student student = new Student();
    Gson gson = new Gson();
    String json = gson.toJson(student);
    exchange.getIn().setBody(json); // the processor does not do this automatically
    log.info("Exchange: {}", exchange.getMessage().getBody());
    log.info("**********: {}", exchange.getMessage());
}
I recommend checking out the new documentation pages for Apache Camel; they are great, especially if you are just starting to use the framework. See https://camel.apache.org/manual/latest/getting-started.html
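As a side note, Camel can also do the conversion declaratively with its Gson data format, instead of calling Gson inside a Processor (a sketch; it assumes the camel-gson dependency is on the classpath):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

public class JsonRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("{{inbound.endpoint}}")
            // marshal the in-body POJO (e.g. Student) to a JSON string
            .marshal().json(JsonLibrary.Gson)
            .to("{{outbound.endpoint}}");
    }
}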

Spring 5 reactive websocket and multiple endpoints?

We are doing a little hackathon at work and I wanted to try some new technology to get away from the usual controller.
I started using Spring Webflux with reactive WebSockets and everything is working fine so far. I configured my WebSocket handler as follows:
import my.handler.DomWebSocketHandler;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.HandlerMapping;
import org.springframework.web.reactive.config.WebFluxConfigurer;
import org.springframework.web.reactive.handler.SimpleUrlHandlerMapping;
import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.server.support.WebSocketHandlerAdapter;
import java.util.HashMap;
import java.util.Map;

@Configuration
public class AppConfig implements WebFluxConfigurer {

    @Autowired
    private WebSocketHandler domWebSocketHandler;

    @Bean
    public HandlerMapping webSocketMapping() {
        Map<String, WebSocketHandler> map = new HashMap<>();
        map.put("/event-emitter", domWebSocketHandler);
        SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
        mapping.setOrder(1);
        mapping.setUrlMap(map);
        return mapping;
    }

    @Bean
    public WebSocketHandlerAdapter handlerAdapter() {
        return new WebSocketHandlerAdapter();
    }
}
After some more research, I learned that it is best practice to work with one connection per client.
Furthermore, using more than one WebSocket per browsing session for the same application seems overkill, since you can use pub/sub channels. See answer here.
Is there a way to restrict the connections per client and use only one endpoint for all required client "requests", or would it be better to create additional endpoints (like you would with a normal controller)?
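For illustration, here is roughly what I mean by multiplexing several logical message types over the single /event-emitter endpoint (a hedged sketch; the type-dispatch idea is my own assumption, not taken from the linked answer):

import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.WebSocketSession;
import reactor.core.publisher.Mono;

public class MultiplexingWebSocketHandler implements WebSocketHandler {

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        // One connection per client; each frame is routed by its content.
        return session.send(
                session.receive()
                        .map(msg -> route(msg.getPayloadAsText()))
                        .map(session::textMessage));
    }

    private String route(String payload) {
        // A real implementation would parse the JSON and branch on a
        // message-type discriminator; this simply echoes for brevity.
        return payload;
    }
}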
Thank you in advance for the help.

Spring Boot with Apache Kafka: Messages not being read

I am currently setting up a Spring Boot application with Kafka listener.
I am trying to code only the consumer. For the producer, I am manually sending messages from the Kafka console for now.
I followed the example:
http://www.source4code.info/2016/09/spring-kafka-consumer-producer-example.html
I tried running this as a Spring Boot application, but I am not able to see any messages being received. There are already some messages in my local Kafka topic.
C:\software\kafka_2.11-0.10.1.0\kafka_2.11-0.10.1.0\kafka_2.11-0.10.1.0\bin\windows>kafka-console-producer.bat --broker-list localhost:9092 --topic test
this is a message
testing again
My Spring Boot application is:
@EnableDiscoveryClient
@SpringBootApplication
public class KafkaApplication {

    /**
     * Run the application using Spring Boot and an embedded servlet engine.
     *
     * @param args Program arguments - ignored.
     */
    public static void main(String[] args) {
        // Tell server to look for registration.properties or registration.yml
        System.setProperty("spring.config.name", "kafka-server");
        SpringApplication.run(KafkaApplication.class, args);
    }
}
And Kafka configuration is:
package kafka;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import java.util.HashMap;
import java.util.Map;

@Configuration
@EnableKafka
public class KafkaConsumerConfig {

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        //factory.setConcurrency(1);
        //factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        //propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        //propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
        //propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        //propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
        //propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return propsMap;
    }

    @Bean
    public Listener listener() {
        return new Listener();
    }
}
And Kafka listener is:
package kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import java.util.concurrent.CountDownLatch;
import java.util.logging.Logger;

public class Listener {

    protected Logger logger = Logger.getLogger(Listener.class.getName());

    private CountDownLatch countDownLatch1 = new CountDownLatch(1);

    public CountDownLatch getCountDownLatch1() {
        return countDownLatch1;
    }

    @KafkaListener(topics = "test")
    public void listen(ConsumerRecord<?, ?> record) {
        logger.info("Received message: " + record);
        System.out.println("Received message: " + record);
        countDownLatch1.countDown();
    }
}
I am trying this for the first time. Please let me know if I am doing anything wrong. Any help will be greatly appreciated.
You did not set ConsumerConfig.AUTO_OFFSET_RESET_CONFIG so the default is "latest". Set it to "earliest" so the consumer will receive messages already in the topic.
ConsumerConfig.AUTO_OFFSET_RESET_CONFIG takes effect only if the consumer group does not already have an offset for a topic partition. If you already ran the consumer with the "latest" setting, then running the consumer again with a different setting does not change the offset. The consumer must use a different group so Kafka will assign offsets for that group.
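For example, in consumerConfigs() above (the group id value here is a placeholder for a group with no committed offsets):

propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "fresh-group-id"); // placeholder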
I noticed that you commented out the consumer group.id property:
//propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
Here is how the official Kafka documentation describes it:
A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy.
I tried uncommenting that line and the consumer worked.
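That is, the line in consumerConfigs() should read:

propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");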
You will need to annotate your Listener class with either @Service or @Component so that Spring Boot can load the Kafka listener.
package kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import java.util.concurrent.CountDownLatch;
import java.util.logging.Logger;

@Component
public class Listener {

    protected Logger logger = Logger.getLogger(Listener.class.getName());

    private CountDownLatch countDownLatch1 = new CountDownLatch(1);

    public CountDownLatch getCountDownLatch1() {
        return countDownLatch1;
    }

    @KafkaListener(topics = "test")
    public void listen(ConsumerRecord<?, ?> record) {
        logger.info("Received message: " + record);
        System.out.println("Received message: " + record);
        countDownLatch1.countDown();
    }
}
The above suggestions are good. If you have followed all of them and it still does not work, check whether lazy initialization is enabled for your application.
Lazy initialization is disabled by default. However, if your application has an explicit setting like the one below,
spring.main.lazy-initialization=true
comment it out or set it to false.

Validating JMS payload in Spring

I have a simple service sending emails. It can be invoked using REST and JMS APIs. I want the requests to be validated before processing.
When I invoke it using REST I can see that org.springframework.validation.DataBinder invokes void validate(Object target, Errors errors, Object... validationHints) and then validator from Hibernate is invoked. This works as expected.
The problem is I can't achieve the same effect with JMS Listener. The listener is implemented as follows:
import lombok.AllArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;
import our.domain.mailing.Mailing;
import our.domain.mailing.jms.api.SendEmailFromTemplateRequest;
import our.domain.mailing.jms.api.SendSimpleEmailRequest;
import javax.validation.Valid;

@ConditionalOnProperty("jms.configuration.destination")
@Component
@AllArgsConstructor(onConstructor = @__(@Autowired))
@Slf4j
public class SendMailMessageListener {

    Mailing mailing;

    @JmsListener(destination = "${jms.configuration.destination}")
    public void sendEmailUsingTemplate(@Valid SendEmailFromTemplateRequest request) {
        log.debug("Received jms message: {}", request);
        mailing.sendEmailTemplate(
                request.getEmailDetails().getRecipients(),
                request.getEmailDetails().getAccountType(),
                request.getTemplateDetails().getTemplateCode(),
                request.getTemplateDetails().getLanguage(),
                request.getTemplateDetails().getParameters());
    }

    @JmsListener(destination = "${jms.configuration.destination}")
    public void sendEmail(@Valid SendSimpleEmailRequest request) {
        log.debug("Received jms message: {}", request);
        mailing.sendEmail(
                request.getRecipients(),
                request.getSubject(),
                request.getMessage());
    }
}
The methods receive payloads, but they are not validated. It's a Spring Boot application and I have @EnableJms added. Can you point me to the part of the Spring source code responsible for discovering @Valid and handling it? Any hints on how to get this working would be much appreciated.
The solution is simple and is clearly described in the official documentation: 29.6.3 Annotated endpoint method signature. There are a few things you have to do (a sketch of the resulting configuration follows below):
Provide a configuration implementing JmsListenerConfigurer (add a @Configuration class implementing this interface).
Add the @EnableJms annotation on top of this configuration.
Create a DefaultMessageHandlerMethodFactory bean. It can be done in this configuration.
Implement the method void configureJmsListeners(JmsListenerEndpointRegistrar registrar) of the JmsListenerConfigurer interface and set the MessageHandlerMethodFactory using the bean you've just created.
Use @Validated instead of @Valid on payload parameters.
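Putting those steps together, the configuration might look roughly like this (a sketch of the steps above; the class and bean names are illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.annotation.JmsListenerConfigurer;
import org.springframework.jms.config.JmsListenerEndpointRegistrar;
import org.springframework.messaging.handler.annotation.support.DefaultMessageHandlerMethodFactory;

@Configuration
@EnableJms
public class JmsValidationConfig implements JmsListenerConfigurer {

    @Bean
    public DefaultMessageHandlerMethodFactory jmsHandlerMethodFactory() {
        return new DefaultMessageHandlerMethodFactory();
    }

    @Override
    public void configureJmsListeners(JmsListenerEndpointRegistrar registrar) {
        // Point @JmsListener endpoints at our handler method factory; the
        // follow-up answer below adds a validator to this factory.
        registrar.setMessageHandlerMethodFactory(jmsHandlerMethodFactory());
    }
}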
You can keep using @Valid in your listeners. Your answer was very close: in the step where you create the DefaultMessageHandlerMethodFactory, call .setValidator(validator), where validator is from org.springframework.validation. You can configure the validator like this:

@Bean
public LocalValidatorFactoryBean configureValidator() {
    return new LocalValidatorFactoryBean();
}

And then inject the validator instance into your JMS config, for example:
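A sketch, extending the configuration from the previous answer (the validator bean comes from the snippet just shown):

@Autowired
private LocalValidatorFactoryBean validator;

@Bean
public DefaultMessageHandlerMethodFactory jmsHandlerMethodFactory() {
    DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
    // With a validator set, @Valid payload parameters are validated.
    factory.setValidator(validator);
    return factory;
}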
