Combining @SqsListener and @RequestMapping - spring

We're currently in the middle of migrating our architecture to Spring-AWS-based microservices. One of my tasks is to research how our microservices will communicate with one another. I'm aiming to set up a hybrid system of RESTful HTTP endpoints and SQS producers and consumers.
As an example, I have the below code:
#SqsListener("request_queue")
#SendTo("response_queue")
#PostMapping("/send")
public Object send(#RequestBody Request request, #Header("SenderId") String senderId) {
if (senderId != null && !senderId.trim().isEmpty()) {
logger.info("SQS Message Received!");
logger.info("Sender ID: ".concat(senderId));
request = new Gson().fromJson(payload, Request.class);
}
Response response = processRequest(request); // Process request
return response;
}
Theoretically, this method should be able to handle the following:
Receive a Request object via HTTP
Continually poll the request_queue for a message containing the Request object
As an HTTP endpoint, the method returns no error. However, as an SQS listener, it runs into the following exception:
org.springframework.messaging.converter.MessageConversionException:
Cannot convert from [java.lang.String] to [com.oriente.salt.Request] for
GenericMessage [payload={"source":"QueueTester","message":"This is a wonderful
message send by queue from Habanero to Salt. Spicy.","msisdn":"+639772108550"},
headers={LogicalResourceId=salt_queue, ApproximateReceiveCount=1,
SentTimestamp=1523444620218, ....
I've tried to annotate the Request param with @Payload, but to no avail. Currently I've also set up the AWS config via Java, as seen below:
ConsumerAWSSQSConfig.java
@Configuration
public class ConsumerAWSSQSConfig {

    @Bean
    public SimpleMessageListenerContainer simpleMessageListenerContainer() {
        SimpleMessageListenerContainer msgListenerContainer = simpleMessageListenerContainerFactory()
                .createSimpleMessageListenerContainer();
        msgListenerContainer.setMessageHandler(queueMessageHandler());
        return msgListenerContainer;
    }

    @Bean
    public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory() {
        SimpleMessageListenerContainerFactory msgListenerContainerFactory = new SimpleMessageListenerContainerFactory();
        msgListenerContainerFactory.setAmazonSqs(amazonSQSClient());
        return msgListenerContainerFactory;
    }

    @Bean
    public QueueMessageHandler queueMessageHandler() {
        QueueMessageHandlerFactory queueMsgHandlerFactory = new QueueMessageHandlerFactory();
        queueMsgHandlerFactory.setAmazonSqs(amazonSQSClient());
        QueueMessageHandler queueMessageHandler = queueMsgHandlerFactory.createQueueMessageHandler();
        List<HandlerMethodArgumentResolver> list = new ArrayList<>();
        HandlerMethodArgumentResolver resolver = new PayloadArgumentResolver(new MappingJackson2MessageConverter());
        list.add(resolver);
        queueMessageHandler.setArgumentResolvers(list);
        return queueMessageHandler;
    }

    @Lazy
    @Bean(name = "amazonSQS", destroyMethod = "shutdown")
    public AmazonSQSAsync amazonSQSClient() {
        AmazonSQSAsync awsSQSAsync = AmazonSQSAsyncClientBuilder.standard().withRegion(Regions.AP_SOUTHEAST_1).build();
        return awsSQSAsync;
    }
}
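One direction I'm still experimenting with (not yet verified against this setup) is relaxing the Jackson converter's strict content-type matching, since SQS messages arrive as plain String payloads without a contentType header and the converter might otherwise never be applied. A rough sketch of that change inside queueMessageHandler():
MappingJackson2MessageConverter jacksonMessageConverter = new MappingJackson2MessageConverter();
// SQS delivers the body as a plain String with no contentType header,
// so don't require an exact application/json match before converting
jacksonMessageConverter.setStrictContentTypeMatch(false);
jacksonMessageConverter.setSerializedPayloadClass(String.class);
HandlerMethodArgumentResolver resolver = new PayloadArgumentResolver(jacksonMessageConverter);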
What do you guys think?

Related

API call not returning response with HttpRequestExecutingMessageHandler

I'm facing an issue where whichever API I call first using HttpRequestExecutingMessageHandler returns a response, while the second API, also called through HttpRequestExecutingMessageHandler, just hangs and returns a 504 timeout even though the request is accepted by the server and processing completes. Both API call methods are defined in two different classes with separate output queue channels.
If I restart the server and call the second API first, it now returns a 200 response, but the first API then stops returning its 200 response.
@Configuration
class OutgoingHttpChannelAdapterConfig {

    @Bean
    @Qualifier("responseChannel1")
    fun fromResponseChannel1(): QueueChannel = MessageChannels.queue().get()

    @Bean
    @Qualifier("customRestTemplateOut")
    fun customRestTemplateOut(): RestTemplate {
        return RestTemplate()
    }

    @Bean
    @ServiceActivator(inputChannel = "forwardDataChannel")
    @Throws(MessageHandlingException::class)
    fun forwardRequestMethod(
        @Qualifier("customRestTemplateOut") restTemplate: RestTemplate
    ): MessageHandler {
        val headerMapper = DefaultHttpHeaderMapper()
        headerMapper.setOutboundHeaderNames("Authorization", "key")
        val msgHandler = HttpRequestExecutingMessageHandler(url)
        msgHandler.setHeaderMapper(headerMapper)
        msgHandler.setHttpMethod(HttpMethod.POST)
        msgHandler.isExpectReply = true
        msgHandler.outputChannel = fromResponseChannel1()
        msgHandler.setExpectedResponseType(DataResponse::class.java)
        return msgHandler
    }
}
@Configuration
class IncomingHttpChannelAdapterConfig {

    @Bean
    @Qualifier("responseChannel2")
    fun fromResponseChannel2(): QueueChannel = MessageChannels.queue().get()

    @Bean
    @Qualifier("customRestTemplate")
    fun customRestTemplate(): RestTemplate {
        return RestTemplate()
    }

    @Bean
    @ServiceActivator(inputChannel = "acceptRequestChannel")
    @Throws(MessageHandlingException::class)
    fun acceptRequestMethod(
        @Qualifier("customRestTemplate") restTemplate: RestTemplate
    ): MessageHandler {
        val parser = SpelExpressionParser()
        val map = mapOf<String, Expression>(
            "id" to parser.parseRaw("payload.id")
        )
        val msgHandler = HttpRequestExecutingMessageHandler(url, restTemplate)
        msgHandler.setHeaderMapper(headerMapper)
        msgHandler.setHttpMethod(HttpMethod.PUT)
        msgHandler.outputChannel = fromResponseChannel2()
        msgHandler.setUriVariableExpressions(map)
        return msgHandler
    }
}
@MessagingGateway(
    defaultRequestChannel = "forwardDataChannel", errorChannel = "newErrorChannel",
    defaultReplyChannel = "replyChannel1"
)
interface ForwardRequest {
    fun forwardRequest(msg: Message<MessageNotification>): Message<*>
}

@MessagingGateway(
    defaultRequestChannel = "acceptRequestChannel", errorChannel = "newErrorChannel",
    defaultReplyChannel = "replyChannel2"
)
interface AcceptRequest {
    fun acceptRequest(msg: Message<MessageNotification>): Message<*>
}
Right now we are not doing anything with the queue channels; they are just used as placeholders.

Return response messages in spring boot

I am working with Spring Boot and an H2 database. I would like to return a 201 status when the record is inserted successfully and a 400 when it is duplicated. I am using ResponseEntity to achieve this; for example, the following is my create method from the Service:
@Override
public ResponseEntity<Object> createEvent(EventDTO eventDTO) {
    if (eventRepository.findOne(eventDTO.getId()) != null) {
        // This is a test, I am looking for the correct message
        return new ResponseEntity(HttpStatus.IM_USED);
    }
    Actor actor = actorService.createActor(eventDTO.getActor());
    Repo repo = repoService.createRepo(eventDTO.getRepo());
    Event event = new Event(eventDTO.getId(), eventDTO.getType(), actor, repo, createdAt(eventDTO));
    eventRepository.save(event);
    return new ResponseEntity(HttpStatus.CREATED);
}
This is my controller:
@PostMapping(value = "/events")
public ResponseEntity addEvent(@RequestBody EventDTO body) {
    return eventService.createEvent(body);
}
But I'm not getting any message in the browser. I am doing different tests with Postman, and when I query for all the events the result is correct, but each time I make a POST I don't get any message in the browser. I am not sure what the cause of this issue is. Any ideas?
The ideal way to send a response to the client is to create a DTO/DAO and return a ResponseEntity from the Controller.
Controller.java
#PostMapping("/test")
public ResponseEntity<Object> testApi(#RequestBody User user)
{
System.out.println("User: "+user.toString());
return assetService.testApi(user);
}
Service.java
public ResponseEntity testApi(User user) {
    if (user.getId() == 1)
        return new ResponseEntity("Created", HttpStatus.CREATED);
    else
        return new ResponseEntity("Used", HttpStatus.IM_USED);
    // for BAD_REQUEST(400) return new ResponseEntity("Bad Request", HttpStatus.BAD_REQUEST);
}
Tested using Postman
Status 201 Created
Status 226 IM Used
Okay, I really don't feel good about the service sending the ResponseEntity rather than the Controller. You could use @ResponseStatus and an ExceptionHandler class for these cases, like below.
Create a class in the exception package:
GlobalExceptionHandler.java
@ControllerAdvice
public class GlobalExceptionHandler {

    @ResponseStatus(HttpStatus.BAD_REQUEST)
    @ExceptionHandler(DataIntegrityViolationException.class) // NOTE : You could create a custom exception class to handle duplications
    public void handleConflict() {
    }
}
Controller.java
@PostMapping(value = "/events")
@ResponseStatus(HttpStatus.CREATED) // You don't have to return any object, this will take care of the status
public void addEvent(@RequestBody EventDTO body) {
    eventService.createEvent(body);
}
Now the changed service would look like this:
Service.java
@Override
public void createEvent(EventDTO eventDTO) { // No need to return
    if (eventRepository.findOne(eventDTO.getId()) != null) {
        throw new DataIntegrityViolationException("Already exists"); // you have to throw the same exception which you have marked in Handler class
    }
    Actor actor = actorService.createActor(eventDTO.getActor());
    Repo repo = repoService.createRepo(eventDTO.getRepo());
    Event event = new Event(eventDTO.getId(), eventDTO.getType(), actor, repo, createdAt(eventDTO));
    eventRepository.save(event);
}
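If you also want a short message in the body along with the 400 status (rather than an empty response), one possible variant, sketched here purely as an illustration, is to have the handler return a ResponseEntity instead of void:
@ControllerAdvice
public class GlobalExceptionHandler {

    // Illustrative variant: return a small error body with the 400 status
    @ExceptionHandler(DataIntegrityViolationException.class)
    public ResponseEntity<String> handleConflict(DataIntegrityViolationException ex) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body("Already exists");
    }
}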

Spring Kafka @SendTo Not Sending Headers

I'm sending a message to Kafka using the ReplyingKafkaTemplate and it's sending the message with a kafka_correlationId. However, when it hits my @KafkaListener method and forwards it to a reply topic, the headers are lost.
How do I preserve the kafka headers?
Here's my method signature:
@KafkaListener(topics = "input")
@SendTo("reply")
public List<CustomOutput> consume(List<CustomInput> inputs) {
    ... /* some processing */
    return outputs;
}
I've created a ProducerInterceptor so I can see what headers are being sent from the ReplyingKafkaTemplate, as well as from the @SendTo annotation. From that, another strange thing is that the ReplyingKafkaTemplate is not adding the documented kafka_replyTopic header to the message.
Here's how the ReplyingKafkaTemplate is configured:
@Bean
public KafkaMessageListenerContainer<Object, Object> replyContainer(ConsumerFactory<Object, Object> cf) {
    ContainerProperties containerProperties = new ContainerProperties(requestReplyTopic);
    return new KafkaMessageListenerContainer<>(cf, containerProperties);
}

@Bean
public ReplyingKafkaTemplate<Object, Object, Object> replyingKafkaTemplate(ProducerFactory<Object, Object> pf, KafkaMessageListenerContainer<Object, Object> container) {
    return new ReplyingKafkaTemplate<>(pf, container);
}
I'm not sure if this is relevant, but I've added Spring Cloud Sleuth as a dependency as well, and the span/trace headers are there when I'm sending messages, but new ones are generated when a message is forwarded.
Arbitrary headers from the request message are not copied to the reply message by default, only the kafka_correlationId.
Starting with version 2.2, you can configure a ReplyHeadersConfigurer which is called to determine which header(s) should be copied.
See the documentation:
"Starting with version 2.2, you can add a ReplyHeadersConfigurer to the listener container factory. This is consulted to determine which headers you want to set in the reply message."
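As a rough sketch (assuming Spring Kafka 2.2+ and a reply template already in place; the bean and header names are illustrative), the configurer can be set on the listener container factory like this:
@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory, KafkaTemplate<Object, Object> replyTemplate) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setReplyTemplate(replyTemplate);
    // copy selected request headers (here "myHeader") onto the reply record
    factory.setReplyHeadersConfigurer((headerName, headerValue) -> "myHeader".equals(headerName));
    return factory;
}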
EDIT
BTW, in 2.2 the ReplyingKafkaTemplate sets up the reply topic header automatically if there is no header.
With 2.1.x, it can be done, but it's a bit involved and you have to do some of the work yourself. The key is to receive and reply with a Message<?>...
@KafkaListener(id = "so55622224", topics = "so55622224")
@SendTo("dummy.we.use.the.header.instead")
public Message<?> listen(Message<String> in) {
    System.out.println(in);
    Headers nativeHeaders = in.getHeaders().get(KafkaHeaders.NATIVE_HEADERS, Headers.class);
    byte[] replyTo = nativeHeaders.lastHeader(KafkaHeaders.REPLY_TOPIC).value();
    byte[] correlation = nativeHeaders.lastHeader(KafkaHeaders.CORRELATION_ID).value();
    return MessageBuilder.withPayload(in.getPayload().toUpperCase())
            .setHeader("myHeader", nativeHeaders.lastHeader("myHeader").value())
            .setHeader(KafkaHeaders.CORRELATION_ID, correlation)
            .setHeader(KafkaHeaders.TOPIC, replyTo)
            .build();
}

// This is used to send the reply - needs a header mapper
@Bean
public KafkaTemplate<?, ?> kafkaTemplate(ProducerFactory<Object, Object> kafkaProducerFactory) {
    KafkaTemplate<Object, Object> kafkaTemplate = new KafkaTemplate<>(kafkaProducerFactory);
    MessagingMessageConverter messageConverter = new MessagingMessageConverter();
    messageConverter.setHeaderMapper(new SimpleKafkaHeaderMapper("*")); // map all byte[] headers
    kafkaTemplate.setMessageConverter(messageConverter);
    return kafkaTemplate;
}

@Bean
public ApplicationRunner runner(ReplyingKafkaTemplate<String, String, String> template) {
    return args -> {
        Headers headers = new RecordHeaders();
        headers.add(new RecordHeader("myHeader", "myHeaderValue".getBytes()));
        headers.add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, "so55622224.replies".getBytes())); // automatic in 2.2
        ProducerRecord<String, String> record = new ProducerRecord<>("so55622224", null, null, "foo", headers);
        RequestReplyFuture<String, String, String> future = template.sendAndReceive(record);
        ConsumerRecord<String, String> reply = future.get();
        System.out.println("Reply: " + reply.value() + " myHeader="
                + new String(reply.headers().lastHeader("myHeader").value()));
    };
}

Streaming upload via @Bean-provided RestTemplateBuilder buffers full file

I'm building a reverse-proxy for uploading large files (multiple gigabytes), and therefore want to use a streaming model that does not buffer entire files. Large buffers would introduce latency and, more importantly, they could result in out-of-memory errors.
My client class contains
@Autowired private RestTemplate restTemplate;

@Bean
public RestTemplate restTemplate(RestTemplateBuilder restTemplateBuilder) {
    int REST_TEMPLATE_MODE = 1; // 1=streams, 2=streams, 3=buffers
    return
        REST_TEMPLATE_MODE == 1 ? new RestTemplate() :
        REST_TEMPLATE_MODE == 2 ? (new RestTemplateBuilder()).build() :
        REST_TEMPLATE_MODE == 3 ? restTemplateBuilder.build() : null;
}
and
public void upload_via_streaming(InputStream inputStream, String originalname) {
    SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
    requestFactory.setBufferRequestBody(false);
    restTemplate.setRequestFactory(requestFactory);
    InputStreamResource inputStreamResource = new InputStreamResource(inputStream) {
        @Override public String getFilename() { return originalname; }
        @Override public long contentLength() { return -1; }
    };
    MultiValueMap<String, Object> body = new LinkedMultiValueMap<String, Object>();
    body.add("myfile", inputStreamResource);
    HttpHeaders headers = new HttpHeaders();
    headers.setContentType(MediaType.MULTIPART_FORM_DATA);
    HttpEntity<MultiValueMap<String, Object>> requestEntity = new HttpEntity<>(body, headers);
    String response = restTemplate.postForObject(UPLOAD_URL, requestEntity, String.class);
    System.out.println("response: " + response);
}
This is working, but notice my REST_TEMPLATE_MODE value controls whether or not it meets my streaming requirement.
Question: Why does REST_TEMPLATE_MODE == 3 result in full-file buffering?
References:
How to forward large files with RestTemplate?
How to send Multipart form data with restTemplate Spring-mvc
Spring - How to stream large multipart file uploads to database without storing on local file system -- establishing the InputStream
How to autowire RestTemplate using annotations
Design notes and usage caveats, also: restTemplate does not support streaming downloads
In short, the instance of RestTemplateBuilder provided as a @Bean by Spring Boot includes an interceptor (filter) associated with actuator/metrics, and the interceptor interface requires buffering of the request body into a simple byte[].
If you instantiate your own RestTemplateBuilder or RestTemplate from scratch, it won't include this by default.
I seem to be the only person visiting this post, but just in case it helps someone before I get around to posting a complete solution, I've found a big clue:
restTemplate.getInterceptors().forEach(item->System.out.println(item));
displays...
org.springframework.boot.actuate.metrics.web.client.MetricsClientHttpRequestInterceptor
If I clear the interceptor list via setInterceptors, it solves the problem. Furthermore, I found that any interceptor, even if it only performs a NOP, will introduce full-file buffering.
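A minimal sketch of that idea (assuming the metrics interceptor can simply be dropped for this client; the bean name is arbitrary):
@Bean
public RestTemplate streamingRestTemplate(RestTemplateBuilder builder) {
    RestTemplate restTemplate = builder.build();
    restTemplate.setInterceptors(Collections.emptyList()); // any interceptor forces full-body buffering
    SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
    requestFactory.setBufferRequestBody(false); // stream the request body
    restTemplate.setRequestFactory(requestFactory);
    return restTemplate;
}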
public class SimpleClientHttpRequestFactory { ...
I have explicitly set bufferRequestBody = false, but apparently this code is bypassed if interceptors are used. This would have been nice to know earlier...
@Override
public ClientHttpRequest createRequest(URI uri, HttpMethod httpMethod) throws IOException {
    HttpURLConnection connection = openConnection(uri.toURL(), this.proxy);
    prepareConnection(connection, httpMethod.name());
    if (this.bufferRequestBody) {
        return new SimpleBufferingClientHttpRequest(connection, this.outputStreaming);
    }
    else {
        return new SimpleStreamingClientHttpRequest(connection, this.chunkSize, this.outputStreaming);
    }
}
public abstract class InterceptingHttpAccessor extends HttpAccessor { ...
This shows that the InterceptingClientHttpRequestFactory is used if the list of interceptors is not empty.
/**
 * Overridden to expose an {@link InterceptingClientHttpRequestFactory}
 * if necessary.
 * @see #getInterceptors()
 */
@Override
public ClientHttpRequestFactory getRequestFactory() {
    List<ClientHttpRequestInterceptor> interceptors = getInterceptors();
    if (!CollectionUtils.isEmpty(interceptors)) {
        ClientHttpRequestFactory factory = this.interceptingRequestFactory;
        if (factory == null) {
            factory = new InterceptingClientHttpRequestFactory(super.getRequestFactory(), interceptors);
            this.interceptingRequestFactory = factory;
        }
        return factory;
    }
    else {
        return super.getRequestFactory();
    }
}
class InterceptingClientHttpRequest extends AbstractBufferingClientHttpRequest { ...
The interfaces make it clear that using InterceptingClientHttpRequest requires buffering the body into a byte[]. There is no option to use a streaming interface.
@Override
public ClientHttpResponse execute(HttpRequest request, byte[] body) throws IOException {

Can I use Spring WebFlux to implement REST services which get data through Kafka request/response topics?

I'm developing a REST service which, in turn, will query a slow legacy system, so response time will be measured in seconds. We also expect massive load, so I was thinking about asynchronous/non-blocking approaches to avoid hundreds of "servlet" threads blocked on calls to the slow system.
As I see it, this can be implemented using AsyncContext, which is present in the new servlet API specs. I even developed a small prototype and it seems to be working.
On the other hand, it looks like I can achieve the same using Spring WebFlux.
Unfortunately I did not find any example where custom "backend" calls are wrapped with Mono/Flux. Most of the examples just reuse already-prepared reactive connectors, like ReactiveCassandraOperations.java, etc.
My data flow is the following:
JS client --> Spring RestController --> send request to Kafka topic --> read response from Kafka reply topic --> return data to client
Can I wrap the Kafka steps into Mono/Flux, and how do I do this?
What should my RestController method look like?
Here is my simple implementation which achieves the same using the Servlet 3.1 API:
// took the idea from some Jetty examples
public class AsyncRestServlet extends HttpServlet {
    ...
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        String result = (String) req.getAttribute(RESULTS_ATTR);
        if (result == null) { // data not ready yet: schedule async processing
            final AsyncContext async = req.startAsync();
            // generate some unique request ID
            String uid = "req-" + String.valueOf(req.hashCode());
            // share it with the Kafka receiver together with the AsyncContext;
            // when the Kafka receiver gets the response it will put it in a Servlet request attribute and call async.dispatch().
            // This doGet() method will be called again and it will send the response to the client
            receiver.rememberKey(uid, async);
            // send request to Kafka
            sender.send(uid, param);
            // data is not ready yet so we are releasing the Servlet thread
            return;
        }
        // return result as html response
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println(result);
        out.close();
    }
Here's a short example - not the WebFlux client you probably had in mind, but at least it would enable you to utilize Flux and Mono for asynchronous processing, which I interpreted to be the point of your question. The web objects should work without additional configuration, but of course you will need to configure Kafka, as the KafkaTemplate object will not work on its own.
@Bean // Using org.springframework.web.reactive.function.server.RouterFunction<ServerResponse>
public RouterFunction<ServerResponse> sendMessageToTopic(KafkaController kafkaController) {
    return RouterFunctions.route(RequestPredicates.POST("/endpoint"), kafkaController::sendMessage);
}
@Component
public class ResponseHandler {
    public Mono<ServerResponse> getServerResponse() {
        return ServerResponse.ok().body(Mono.just(Status.SUCCESS), String.class);
    }
}
@Component
public class KafkaController {

    @Autowired
    private MyKafkaPublisher myKafkaPublisher;

    @Autowired
    private ResponseHandler responseHandler;

    public Mono<ServerResponse> sendMessage(ServerRequest request) {
        return request.bodyToMono(TopicMsgMap.class)
                // your HTTP call may not return immediately without this
                .subscribeOn(Schedulers.single()) // for a single worker thread
                .flatMap(topicMsgMap -> {
                    myKafkaPublisher.sendMessages(topicMsgMap.getTopicMsgMap());
                    return responseHandler.getServerResponse();
                });
    }
}
@Data // model class just to easily convert the ServerRequest (from json, for ex.)
// + ~@constructors
public class TopicMsgMap {
    private Map<String, String> topicMsgMap;
}
@Service // Using org.springframework.kafka.core.KafkaTemplate<String, String>
public class MyKafkaPublisher {

    @Autowired
    private KafkaTemplate<String, String> template;

    @Value("${topic1}")
    private String topic1;

    @Value("${topic2}")
    private String topic2;

    public void sendMessages(Map<String, String> topicMsgMap) {
        topicMsgMap.forEach((topic, msg) -> {
            if (topic.equals("topic1")) template.send(topic1, msg);
            if (topic.equals("topic2")) template.send(topic2, msg);
        });
    }
}
Guessing this isn't the use-case you had in mind, but hope you find this general structure useful.
There are several approaches to this problem, including ReplyingKafkaTemplate, but continuing the approach you took with the Servlet API, the solution would be something like this in Spring WebFlux.
Your controller method would look like this:
@RequestMapping(path = "/completable-future", method = RequestMethod.POST)
Mono<Response> asyncTransaction(@RequestBody RequestDto requestDto, @RequestHeader Map<String, String> requestHeaders) {
    String internalTransactionId = UUID.randomUUID().toString();
    kafkaSender.send(Request.builder()
            .transactionId(requestHeaders.get("transactionId"))
            .internalTransactionId(internalTransactionId)
            .sourceIban(requestDto.getSourceIban())
            .destIban(requestDto.getDestIban())
            .build());
    CompletableFuture<Response> completableFuture = new CompletableFuture<>();
    taskHolder.pushTask(completableFuture, internalTransactionId);
    return Mono.fromFuture(completableFuture);
}
Your taskHolder component will be something like this:
@Component
public class TaskHolder {

    private Map<String, CompletableFuture> taskHolder = new ConcurrentHashMap<>();

    public void pushTask(CompletableFuture<Response> task, String transactionId) {
        this.taskHolder.put(transactionId, task);
    }

    public Optional<CompletableFuture> remove(String transactionId) {
        return Optional.ofNullable(this.taskHolder.remove(transactionId));
    }
}
And finally your Kafka ResponseListener looks like this:
@Component
public class ResponseListener {

    @Autowired
    TaskHolder taskHolder;

    @KafkaListener(topics = "reactive-response-topic", groupId = "test")
    public void listen(Response response) {
        taskHolder.remove(response.getInternalTransactionId()).orElse(
                new CompletableFuture()).complete(response);
    }
}
In this example I used internalTransactionId as the correlation ID, but you can use "kafka_correlationId", which is a known Kafka header.
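If you go that route, a rough sketch (assuming the requesting side sets kafka_correlationId to a UTF-8 string and the replying service copies it back unchanged) would key the TaskHolder by that header instead of a payload field:
@KafkaListener(topics = "reactive-response-topic", groupId = "test")
public void listen(Response response,
        @Header(KafkaHeaders.CORRELATION_ID) byte[] correlationId) {
    // look up the pending CompletableFuture by the kafka_correlationId header value
    taskHolder.remove(new String(correlationId, StandardCharsets.UTF_8))
            .orElse(new CompletableFuture()).complete(response);
}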
