Spring Kafka @SendTo Not Sending Headers

I'm sending a message to Kafka using the ReplyingKafkaTemplate and it's sending the message with a kafka_correlationId. However, when it hits my @KafkaListener method and is forwarded to a reply topic, the headers are lost.
How do I preserve the kafka headers?
Here's my method signature:
@KafkaListener(topics = "input")
@SendTo("reply")
public List<CustomOutput> consume(List<CustomInput> inputs) {
    ... /* some processing */
    return outputs;
}
I've created a ProducerInterceptor so I can see what headers are being sent from the ReplyingKafkaTemplate, as well as from the @SendTo annotation. From that I also noticed another strange thing: the ReplyingKafkaTemplate is not adding the documented kafka_replyTopic header to the message.
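For reference, a header-logging interceptor of that kind can be as small as the sketch below (the class name is illustrative, not the one used here; it is registered on the producer via ProducerConfig.INTERCEPTOR_CLASSES_CONFIG):
public class HeaderLoggingProducerInterceptor implements ProducerInterceptor<Object, Object> {

    @Override
    public ProducerRecord<Object, Object> onSend(ProducerRecord<Object, Object> record) {
        // print every header on the outgoing record before it is serialized
        record.headers().forEach(h -> System.out.println(
                h.key() + " = " + (h.value() == null ? "null" : new String(h.value()))));
        return record;
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // no-op
    }

    @Override
    public void close() {
        // no-op
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // no-op
    }
}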
Here's how the ReplyingKafkaTemplate is configured:
@Bean
public KafkaMessageListenerContainer<Object, Object> replyContainer(ConsumerFactory<Object, Object> cf) {
    ContainerProperties containerProperties = new ContainerProperties(requestReplyTopic);
    return new KafkaMessageListenerContainer<>(cf, containerProperties);
}

@Bean
public ReplyingKafkaTemplate<Object, Object, Object> replyingKafkaTemplate(ProducerFactory<Object, Object> pf, KafkaMessageListenerContainer<Object, Object> container) {
    return new ReplyingKafkaTemplate<>(pf, container);
}
I'm not sure if this is relevant, but I've added Spring Cloud Sleuth as a dependency as well, and the span/trace headers are there when I'm sending messages, but new ones are generated when a message is forwarded.

Arbitrary headers from the request message are not copied to the reply message by default, only the kafka_correlationId.
Starting with version 2.2, you can configure a ReplyHeadersConfigurer which is called to determine which header(s) should be copied.
See the documentation:
"Starting with version 2.2, you can add a ReplyHeadersConfigurer to the listener container factory. This is consulted to determine which headers you want to set in the reply message."
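For example, a minimal sketch (assuming a ConcurrentKafkaListenerContainerFactory bean; the header name "myHeader" is just illustrative) that copies one custom request header onto every reply:
@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // shouldCopy(headerName, headerValue) is consulted for each header on the request;
    // return true for the ones that should be echoed on the reply record
    factory.setReplyHeadersConfigurer((headerName, headerValue) -> "myHeader".equals(headerName));
    return factory;
}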
EDIT
BTW, in 2.2 the ReplyingKafkaTemplate sets up the reply topic header automatically if there is no header present.
With 2.1.x, it can be done, but it's a bit involved and you have to do some of the work yourself. The key is to receive and reply with a Message<?>...
@KafkaListener(id = "so55622224", topics = "so55622224")
@SendTo("dummy.we.use.the.header.instead")
public Message<?> listen(Message<String> in) {
    System.out.println(in);
    Headers nativeHeaders = in.getHeaders().get(KafkaHeaders.NATIVE_HEADERS, Headers.class);
    byte[] replyTo = nativeHeaders.lastHeader(KafkaHeaders.REPLY_TOPIC).value();
    byte[] correlation = nativeHeaders.lastHeader(KafkaHeaders.CORRELATION_ID).value();
    return MessageBuilder.withPayload(in.getPayload().toUpperCase())
            .setHeader("myHeader", nativeHeaders.lastHeader("myHeader").value())
            .setHeader(KafkaHeaders.CORRELATION_ID, correlation)
            .setHeader(KafkaHeaders.TOPIC, replyTo)
            .build();
}
// This is used to send the reply - needs a header mapper
@Bean
public KafkaTemplate<?, ?> kafkaTemplate(ProducerFactory<Object, Object> kafkaProducerFactory) {
    KafkaTemplate<Object, Object> kafkaTemplate = new KafkaTemplate<>(kafkaProducerFactory);
    MessagingMessageConverter messageConverter = new MessagingMessageConverter();
    messageConverter.setHeaderMapper(new SimpleKafkaHeaderMapper("*")); // map all byte[] headers
    kafkaTemplate.setMessageConverter(messageConverter);
    return kafkaTemplate;
}
@Bean
public ApplicationRunner runner(ReplyingKafkaTemplate<String, String, String> template) {
    return args -> {
        Headers headers = new RecordHeaders();
        headers.add(new RecordHeader("myHeader", "myHeaderValue".getBytes()));
        headers.add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, "so55622224.replies".getBytes())); // automatic in 2.2
        ProducerRecord<String, String> record = new ProducerRecord<>("so55622224", null, null, "foo", headers);
        RequestReplyFuture<String, String, String> future = template.sendAndReceive(record);
        ConsumerRecord<String, String> reply = future.get();
        System.out.println("Reply: " + reply.value() + " myHeader="
                + new String(reply.headers().lastHeader("myHeader").value()));
    };
}

Related

My SqsListener returns the body as the messageId, and how do I get the MessageId of a sent message with QueueMessagingTemplate in Spring Boot?

I have two issues regarding the Spring AWS SQS SDK (or maybe I am doing it wrong).
The first is that when I used the CLI earlier, I managed to get the message Id of the sent message, for example:
aws sqs send-message --queue-url https://sqs.us-west-2.amazonaws.com/testqueue --message-body hooray
{
    "MD5OfMessageBody": "d3101ad",
    "MessageId": "jdhj-933"
}
Now I tried with spring-cloud-starter-aws-messaging and I set up a QueueMessagingTemplate like this:
private final QueueMessagingTemplate queueMessagingTemplate;

public SqsQueueService(@Qualifier("amazonSQSAsync") final AmazonSQSAsync amazonSQS) {
    this.queueMessagingTemplate = new QueueMessagingTemplate(amazonSQS);
}
public void sendMessage(String queueName, String queueMessage) {
    Map<String, Object> headers = new HashMap<>();
    queueMessagingTemplate.convertAndSend(queueName, queueMessage, headers);
}
I can't seem to get the message Id of the sent message when using queueMessagingTemplate.convertAndSend(queueName, queueMessage, headers).
I need the messageId to fulfil some business logic.
The second issue is that my listener can receive messages; however, the messageId is not populated correctly either:
@Async
@SqsListener(value = "${notification.sqs-queue-url}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void listen(Acknowledgment acknowledgment, String message, String messageId) {
    // messageId is equal to message here, which is wrong for me
}
The message is always equal to the messageId, which is confusing. Any advice on where I may be going wrong?
I changed the listener method signature to
@Async
@SqsListener(value = "${queue-url}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void listen(Acknowledgment acknowledgment, String message, @Headers MessageHeaders headers) throws ExecutionException, InterruptedException {
    String messageId = (String) headers.get("MessageId");
    acknowledgment.acknowledge().get();
}
Then I extracted the messageId from the headers map.
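The send side is not addressed above. As a sketch only (an assumption, not part of the original answer, and assuming the AmazonSQSAsync client is also kept as a field): QueueMessagingTemplate.convertAndSend() returns void, so one workaround is to drop down to the plain SDK client, whose sendMessage() call returns the MessageId shown in the CLI output.
public String sendMessage(String queueUrl, String queueMessage) {
    // note: the plain SDK client takes the queue URL, not the logical queue name
    SendMessageResult result = amazonSQS.sendMessage(new SendMessageRequest(queueUrl, queueMessage));
    return result.getMessageId();
}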

Spring Integration HttpRequestExecutingMessageHandler ContentType Issue

I am facing a problem with Spring Integration. I am trying to execute a REST call via HttpRequestExecutingMessageHandler. My REST endpoint accepts content-type 'application/json' only.
The problem is that the HttpRequestExecutingMessageHandler is posting with content-type 'text/plain;charset=UTF-8'.
#ServiceActivator(inputChannel = "transformRequestToJsonChannel",
outputChannel = "httpRequestOutChannel")
public Message<?> transformRequest(Message<DocumentConverterRequest>
message)
{
LOG.info("transforming document converter request to json: '{}'",
ObjectToJsonTransformer transformer = new ObjectToJsonTransformer();
transformer.setContentType(MediaType.APPLICATION_JSON_UTF8_VALUE);
Object payload = transformer.transform(message).getPayload();
LOG.info("payload: '{}'", payload.toString());
return MessageBuilder.withPayload(payload).build();
}
@Bean
@ServiceActivator(inputChannel = "httpRequestOutChannel")
public HttpRequestExecutingMessageHandler outbound() {
    HttpRequestExecutingMessageHandler handler =
            new HttpRequestExecutingMessageHandler(documentConverterRestUrl);
    handler.setHttpMethod(HttpMethod.POST);
    handler.setErrorHandler(httpResponseErrorHandler);
    handler.setExpectedResponseType(String.class);
    handler.setCharset(Charset.defaultCharset().name());
    HeaderMapper<HttpHeaders> mapper = new DefaultHttpHeaderMapper();
    HttpHeaders httpHeaders = new HttpHeaders();
    httpHeaders.add(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE);
    mapper.toHeaders(httpHeaders);
    handler.setHeaderMapper(mapper);
    handler.setOutputChannel(httpResponseChannel());
    return handler;
}
How can I override the content-type?
This piece of code does nothing:
HeaderMapper<HttpHeaders> mapper = new DefaultHttpHeaderMapper();
HttpHeaders httpHeaders = new HttpHeaders();
httpHeaders.add(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE);
mapper.toHeaders(httpHeaders);
That toHeaders() is called by the HttpRequestExecutingMessageHandler when a response is received. It is really useless to call it explicitly in your code, especially in the bean definition phase and when you ignore the result.
You don't need to use an explicit HeaderMapper at all: a default one should be enough for you.
The ObjectToJsonTransformer really does map that setContentType() value into the headers of the message it produces:
if (headers.containsKey(MessageHeaders.CONTENT_TYPE)) {
    // override, unless empty
    if (this.contentTypeExplicitlySet && StringUtils.hasLength(this.contentType)) {
        headers.put(MessageHeaders.CONTENT_TYPE, this.contentType);
    }
}
else if (StringUtils.hasLength(this.contentType)) {
    headers.put(MessageHeaders.CONTENT_TYPE, this.contentType);
}
So, there is a proper content type to map. By default HttpRequestExecutingMessageHandler uses:
/**
 * Factory method for creating a basic outbound mapper instance.
 * This will map all standard HTTP request headers when sending an HTTP request,
 * and it will map all standard HTTP response headers when receiving an HTTP response.
 * @return The default outbound mapper.
 */
public static DefaultHttpHeaderMapper outboundMapper() {
It comes with an appropriate set of headers to map onto the HTTP request and from the HTTP response.
A plain new DefaultHttpHeaderMapper(), on the other hand, brings just an empty set of headers to map.
Please raise an issue to improve the JavaDocs and Reference Manual to note that the default constructor of that class doesn't register any headers to map.
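As a sketch of the minimal in-place fix (keeping the original handler bean rather than the configuration the asker eventually used), the header mapper line would become:
// outboundMapper() pre-registers the standard HTTP request/response headers,
// so the Content-Type message header is actually mapped onto the request
handler.setHeaderMapper(DefaultHttpHeaderMapper.outboundMapper());
The asker's final configuration, shown next, instead switched to the Java DSL's Http.outboundGateway() with a MappingJackson2HttpMessageConverter: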
@Bean
@ServiceActivator(inputChannel = "httpRequestOutChannel")
public HttpRequestExecutingMessageHandler outbound() {
    HttpRequestExecutingMessageHandler handler = Http.outboundGateway(documentConverterRestUrl)
            .httpMethod(HttpMethod.POST)
            .messageConverters(new MappingJackson2HttpMessageConverter())
            .mappedRequestHeaders("Content-Type")
            .get();
    handler.setOutputChannel(httpResponseChannel());
    return handler;
}
I removed my ObjectToJsonTransformer, because messageConverters(new MappingJackson2HttpMessageConverter()) does that work.
Also, I had to add the content-type to my message header: .setHeaderIfAbsent(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)

Streaming upload via @Bean-provided RestTemplateBuilder buffers the full file

I'm building a reverse-proxy for uploading large files (multiple gigabytes), and therefore want to use a streaming model that does not buffer entire files. Large buffers would introduce latency and, more importantly, they could result in out-of-memory errors.
My client class contains
@Autowired private RestTemplate restTemplate;

@Bean
public RestTemplate restTemplate(RestTemplateBuilder restTemplateBuilder) {
    int REST_TEMPLATE_MODE = 1; // 1=streams, 2=streams, 3=buffers
    return
        REST_TEMPLATE_MODE == 1 ? new RestTemplate() :
        REST_TEMPLATE_MODE == 2 ? (new RestTemplateBuilder()).build() :
        REST_TEMPLATE_MODE == 3 ? restTemplateBuilder.build() : null;
}
and
public void upload_via_streaming(InputStream inputStream, String originalname) {
    SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
    requestFactory.setBufferRequestBody(false);
    restTemplate.setRequestFactory(requestFactory);
    InputStreamResource inputStreamResource = new InputStreamResource(inputStream) {
        @Override public String getFilename() { return originalname; }
        @Override public long contentLength() { return -1; }
    };
    MultiValueMap<String, Object> body = new LinkedMultiValueMap<String, Object>();
    body.add("myfile", inputStreamResource);
    HttpHeaders headers = new HttpHeaders();
    headers.setContentType(MediaType.MULTIPART_FORM_DATA);
    HttpEntity<MultiValueMap<String, Object>> requestEntity = new HttpEntity<>(body, headers);
    String response = restTemplate.postForObject(UPLOAD_URL, requestEntity, String.class);
    System.out.println("response: " + response);
}
This is working, but notice my REST_TEMPLATE_MODE value controls whether or not it meets my streaming requirement.
Question: Why does REST_TEMPLATE_MODE == 3 result in full-file buffering?
References:
How to forward large files with RestTemplate?
How to send Multipart form data with restTemplate Spring-mvc
Spring - How to stream large multipart file uploads to database without storing on local file system -- establishing the InputStream
How to autowire RestTemplate using annotations
Design notes and usage caveats, also: restTemplate does not support streaming downloads
In short, the instance of RestTemplateBuilder provided as a @Bean by Spring Boot includes an interceptor (filter) associated with actuator metrics, and the interceptor interface requires buffering of the request body into a simple byte[].
If you instantiate your own RestTemplateBuilder or RestTemplate from scratch, it won't include this by default.
I seem to be the only person visiting this post, but just in case it helps someone before I get around to posting a complete solution, I've found a big clue:
restTemplate.getInterceptors().forEach(item -> System.out.println(item));
displays...
org.springframework.boot.actuate.metrics.web.client.MetricsClientHttpRequestInterceptor
If I clear the interceptor list via setInterceptors(), it solves the problem. Furthermore, I found that any interceptor, even one that only performs a no-op, will introduce full-file buffering.
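As a sketch of that workaround (assuming the builder-provided template is still the one injected), clearing the interceptors and installing a non-buffering request factory looks like:
SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
requestFactory.setBufferRequestBody(false);             // stream the request body
restTemplate.setInterceptors(Collections.emptyList());  // any interceptor forces byte[] buffering
restTemplate.setRequestFactory(requestFactory);
Note that clearing the interceptors also removes the actuator metrics for these calls; that is the trade-off.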
public class SimpleClientHttpRequestFactory { ...
I have explicitly set bufferRequestBody = false, but apparently this code is bypassed if interceptors are used. This would have been nice to know earlier...
@Override
public ClientHttpRequest createRequest(URI uri, HttpMethod httpMethod) throws IOException {
    HttpURLConnection connection = openConnection(uri.toURL(), this.proxy);
    prepareConnection(connection, httpMethod.name());
    if (this.bufferRequestBody) {
        return new SimpleBufferingClientHttpRequest(connection, this.outputStreaming);
    }
    else {
        return new SimpleStreamingClientHttpRequest(connection, this.chunkSize, this.outputStreaming);
    }
}
public abstract class InterceptingHttpAccessor extends HttpAccessor { ...
This shows that the InterceptingClientHttpRequestFactory is used if the list of interceptors is not empty.
/**
 * Overridden to expose an {@link InterceptingClientHttpRequestFactory}
 * if necessary.
 * @see #getInterceptors()
 */
@Override
public ClientHttpRequestFactory getRequestFactory() {
    List<ClientHttpRequestInterceptor> interceptors = getInterceptors();
    if (!CollectionUtils.isEmpty(interceptors)) {
        ClientHttpRequestFactory factory = this.interceptingRequestFactory;
        if (factory == null) {
            factory = new InterceptingClientHttpRequestFactory(super.getRequestFactory(), interceptors);
            this.interceptingRequestFactory = factory;
        }
        return factory;
    }
    else {
        return super.getRequestFactory();
    }
}
class InterceptingClientHttpRequest extends AbstractBufferingClientHttpRequest { ...
The interfaces make it clear that using InterceptingClientHttpRequest requires buffering the body into a byte[]. There is no option to use a streaming interface.
@Override
public ClientHttpResponse execute(HttpRequest request, byte[] body) throws IOException {

Combining @SqsListener and @RequestMapping

We're currently in the middle of migrating our architecture to Spring-AWS-based microservices. One of my tasks is to research how our microservices communicate with one another. I'm aiming to set up a hybrid system of RESTful HTTP endpoints and SQS producers and consumers.
As an example, I have the below code:
#SqsListener("request_queue")
#SendTo("response_queue")
#PostMapping("/send")
public Object send(#RequestBody Request request, #Header("SenderId") String senderId) {
if (senderId != null && !senderId.trim().isEmpty()) {
logger.info("SQS Message Received!");
logger.info("Sender ID: ".concat(senderId));
request = new Gson().fromJson(payload, Request.class);
}
Response response = processRequest(request); // Process request
return response;
}
Theoretically, this method should be able to handle the following:
Receive a Request object via HTTP
Continually poll the request_queue for a message containing the Request object
As an HTTP endpoint, the method returns no error. However, as an SQS listener, it runs into the following exception:
org.springframework.messaging.converter.MessageConversionException:
Cannot convert from [java.lang.String] to [com.oriente.salt.Request] for
GenericMessage [payload={"source":"QueueTester","message":"This is a wonderful
message send by queue from Habanero to Salt. Spicy.","msisdn":"+639772108550"},
headers={LogicalResourceId=salt_queue, ApproximateReceiveCount=1,
SentTimestamp=1523444620218, ....
I've tried to annotate the Request param with @Payload, but to no avail. Currently I've also set up the AWS config via Java, as seen below:
ConsumerAWSSQSConfig.java
@Configuration
public class ConsumerAWSSQSConfig {

    @Bean
    public SimpleMessageListenerContainer simpleMessageListenerContainer() {
        SimpleMessageListenerContainer msgListenerContainer = simpleMessageListenerContainerFactory()
                .createSimpleMessageListenerContainer();
        msgListenerContainer.setMessageHandler(queueMessageHandler());
        return msgListenerContainer;
    }

    @Bean
    public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory() {
        SimpleMessageListenerContainerFactory msgListenerContainerFactory = new SimpleMessageListenerContainerFactory();
        msgListenerContainerFactory.setAmazonSqs(amazonSQSClient());
        return msgListenerContainerFactory;
    }

    @Bean
    public QueueMessageHandler queueMessageHandler() {
        QueueMessageHandlerFactory queueMsgHandlerFactory = new QueueMessageHandlerFactory();
        queueMsgHandlerFactory.setAmazonSqs(amazonSQSClient());
        QueueMessageHandler queueMessageHandler = queueMsgHandlerFactory.createQueueMessageHandler();
        List<HandlerMethodArgumentResolver> list = new ArrayList<>();
        HandlerMethodArgumentResolver resolver = new PayloadArgumentResolver(new MappingJackson2MessageConverter());
        list.add(resolver);
        queueMessageHandler.setArgumentResolvers(list);
        return queueMessageHandler;
    }

    @Lazy
    @Bean(name = "amazonSQS", destroyMethod = "shutdown")
    public AmazonSQSAsync amazonSQSClient() {
        AmazonSQSAsync awsSQSAsync = AmazonSQSAsyncClientBuilder.standard().withRegion(Regions.AP_SOUTHEAST_1).build();
        return awsSQSAsync;
    }
}
What do you guys think?

No converter found to convert payload type .. to expected payload type [byte[]]

I have a STOMP over WebSocket client using Stomp.js that sends a message to a queue:
var destinationProductProd_02 = "jms.queue.shat";

function sendMessageProduct() {
    var product = {
        productId: "111",
        name: "laptop",
        quantity: 2
    };
    var beforeSend = JSON.stringify(product);
    console.log("typeof message: " + typeof beforeSend); // <<--- String
    stompClient.send(destinationProductProd_02, {}, beforeSend);
}
And on the server side I have:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrency("3-10");
    SimpleMessageConverter s = new SimpleMessageConverter();
    factory.setMessageConverter(s);
    return factory;
}
SimpleMessageConverter is the default converter that Spring uses.
My listener is the following:
@JmsListener(containerFactory = "jmsListenerContainerFactory", destination = ORDER_QUEUE)
public void receiveMessage(Session ses, @Payload final Message message, @Headers final Map<String, Object> headers) {
    System.out.println("MessageReceiver::receiveMessage(product) payload class: " + message.getPayload().getClass());
}
The message.getPayload().getClass() call indicates that the payload type is a byte array ([B).
Why am I receiving a byte array if I'm sending text?
Or how can I convert this byte array to a Java object?
What happens if I send JSON and a serialized Java object to the same queue... how do I manage two different types of message in the same queue? I want to use something like:
public void receiveMessage(@Payload final Message<Product> message) {...}
where Product is a POJO class.
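As a hedged aside on the byte[]-to-POJO step only (Stomp.js typically sets a content-length header, which makes the broker deliver the frame as a JMS BytesMessage rather than a TextMessage, hence the byte[] payload): assuming Jackson is on the classpath and the bytes are the UTF-8 JSON text the client sent, a minimal sketch is:
@JmsListener(containerFactory = "jmsListenerContainerFactory", destination = ORDER_QUEUE)
public void receiveMessage(@Payload final Message<byte[]> message) throws IOException {
    // decode the raw frame body back to its JSON text, then map it onto the POJO with Jackson
    String json = new String(message.getPayload(), StandardCharsets.UTF_8);
    Product product = new ObjectMapper().readValue(json, Product.class);
    System.out.println("received product: " + product);
}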
