Spring Integration not moving file after processing

My requirement is to move files from an input to an output directory. Currently, I receive an XML file, parse it, process it, and would like to move it to a new folder. I am using Spring Boot 2.0 and Spring Integration 5. The code is attached below. This integration flow processes the file, but after processing it does not move the file to the new directory.
Could you please let me know what is missing and how to fix this?
The logs are:
2018-04-06 15:55:16.473 DEBUG 6364 --- [ask-scheduler-1] o.s.i.handler.ServiceActivatingHandler   : handler 'ServiceActivator for [org.springframework.integration.handler.BeanNameMessageProcessor#33a55bd8] (org.springframework.integration.handler.ServiceActivatingHandler#0)' produced no reply for request Message: GenericMessage [payload=Producers {id: -2147483648, parent-id: 0}, headers={file_originalFile=C:\slim\OBDF\Entire_IMO_hierarchy.xml, id=3ee00fca-1f2b-be84-742a-b5c6edfaf42a, file_name=Entire_IMO_hierarchy.xml, file_relativePath=Entire_IMO_hierarchy.xml, timestamp=1523055316426}]
2018-04-06 15:55:16.475 DEBUG 6364 --- [ask-scheduler-1] o.s.integration.channel.DirectChannel    : postSend (sent=true) on channel 'slimflow.channel#1', message: GenericMessage [payload=Producers {id: -2147483648, parent-id: 0}, headers={file_originalFile=C:\slim\OBDF\Entire_IMO_hierarchy.xml, id=3ee00fca-1f2b-be84-742a-b5c6edfaf42a, file_name=Entire_IMO_hierarchy.xml, file_relativePath=Entire_IMO_hierarchy.xml, timestamp=1523055316426}]
2018-04-06 15:55:16.480 DEBUG 6364 --- [ask-scheduler-1] o.s.integration.channel.DirectChannel    : postSend (sent=true) on channel 'slimflow.channel#0', message: GenericMessage [payload=C:\slim\OBDF\Entire_IMO_hierarchy.xml, headers={file_originalFile=C:\slim\OBDF\Entire_IMO_hierarchy.xml, id=0f673954-bceb-6e64-0d47-639522002569, file_name=Entire_IMO_hierarchy.xml, file_relativePath=Entire_IMO_hierarchy.xml, timestamp=1523055316320}]
Integration flow config
import java.io.File;
import java.util.function.Function;
import javax.xml.bind.JAXBException;
import javax.xml.stream.XMLStreamException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.file.FileReadingMessageSource;
import org.springframework.integration.file.FileWritingMessageHandler;
import org.springframework.integration.file.filters.AcceptOnceFileListFilter;
import org.springframework.integration.file.filters.ChainFileListFilter;
import org.springframework.integration.file.filters.RegexPatternFileListFilter;
import org.springframework.integration.transformer.PayloadTypeConvertingTransformer;
import org.springframework.messaging.MessageHandler;
@Configuration
@EnableIntegration
public class SlimIntegrationConfig {

    @Value("${input.directory}")
    private String inputDir;

    @Value("${outputDir.directory}")
    private String outputDir;

    @Value("${input.scan.frequency: 100000}")
    private long scanFrequency;

    @Autowired
    private XmlBeanExtractor<Producers> xmlBeanExtractor;

    @Bean
    public MessageSource<File> inputFileSource() {
        FileReadingMessageSource src = new FileReadingMessageSource(
                (f1, f2) -> Long.valueOf(f1.lastModified()).compareTo(f2.lastModified()));
        src.setDirectory(new File(inputDir));
        src.setAutoCreateDirectory(true);
        ChainFileListFilter<File> chainFileListFilter = new ChainFileListFilter<>();
        chainFileListFilter.addFilter(new AcceptOnceFileListFilter<>());
        chainFileListFilter.addFilter(new RegexPatternFileListFilter("(?i)^.+\\.xml$"));
        src.setFilter(chainFileListFilter);
        return src;
    }

    @Bean
    public DirectChannel outputChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageHandler fileOutboundChannelAdapter() {
        FileWritingMessageHandler adapter = new FileWritingMessageHandler(new File(outputDir));
        adapter.setDeleteSourceFiles(true);
        adapter.setAutoCreateDirectory(true);
        adapter.setExpectReply(false);
        return adapter;
    }

    @Bean
    PayloadTypeConvertingTransformer<File, Producers> xmlBeanTranformer() {
        PayloadTypeConvertingTransformer<File, Producers> tranformer = new PayloadTypeConvertingTransformer<>();
        tranformer.setConverter(file -> {
            Producers p = null;
            try {
                p = xmlBeanExtractor.extract(file.getAbsolutePath(), Producers.class);
            } catch (JAXBException | XMLStreamException e) {
                e.printStackTrace();
            }
            return p;
        });
        return tranformer;
    }

    @Bean
    public IntegrationFlow slimflow() {
        return IntegrationFlows
                .from(inputFileSource(), spec -> spec.poller(Pollers.fixedDelay(scanFrequency)))
                .transform(xmlBeanTranformer())
                .handle("slimFileProcessor", "processfile")
                .channel(outputChannel())
                .handle(fileOutboundChannelAdapter())
                .get();
    }
}

We need to know what your slimFileProcessor.processfile() does. In any case, it doesn't receive what you might expect: in the xmlBeanTranformer you convert the File payload into a Producers object, and exactly that object is sent to the slimFileProcessor.
So, that's the first problem: there is no File in the payload for the FileWritingMessageHandler. But we can fix that a bit later.
Now, you have a log entry like:
ServiceActivatingHandler#0)' produced no reply for request
So your slimFileProcessor doesn't return anything to be sent to the outputChannel() for the potential file move from one directory to another.
If returning something isn't possible with your logic at all, you can consider using a .publishSubscribeChannel(). Make the xmlBeanTranformer() branch one subscriber and the fileOutboundChannelAdapter() another. This way the same File object will be sent to both branches. Note only that the second branch won't be called until the first one finishes its work, as long as everything is done on the same thread.
You could still live with a simple linear flow, because you get the benefit of the FileHeaders.ORIGINAL_FILE header, which is used by the FileWritingMessageHandler. But keep in mind that the latter supports only these types for the request message payload: File, InputStream, byte[] or String. For your move-after-processing use case it would, of course, be better to deal with the File type. That's why I suggest considering the publish-subscribe variant.
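For illustration, here is a minimal sketch of that publish-subscribe variant, reusing the bean definitions from the question (the exact wiring below is an assumption about how the pieces would fit together, not code from the question):

@Bean
public IntegrationFlow slimflow() {
    return IntegrationFlows
            .from(inputFileSource(), spec -> spec.poller(Pollers.fixedDelay(scanFrequency)))
            .publishSubscribeChannel(pubSub -> pubSub
                    // first subscriber: parse the XML and run the business logic
                    .subscribe(flow -> flow
                            .transform(xmlBeanTranformer())
                            .handle("slimFileProcessor", "processfile"))
                    // second subscriber: still sees the original File payload and moves it
                    .subscribe(flow -> flow
                            .handle(fileOutboundChannelAdapter())))
            .get();
}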

Related

aggregator spring cloud stream with timeout

I want to make an application that receives messages, stores those messages in a list, and later, on a schedule, releases those messages every x amount of time.
I know Spring Cloud Stream has an aggregator that already does this, but I think I need it to be done manually, because I need to keep a unique message based upon a key and only replace the old message if it matches a specific condition (I think of it as a Set aggregator with conditions).
Here is what I have tried so far (also in this repository: https://github.com/chalimbu/AggregatorQuestionStack ).
Processor:
import org.springframework.cloud.stream.annotation.EnableBinding
import org.springframework.cloud.stream.annotation.Input
import org.springframework.cloud.stream.annotation.Output
import org.springframework.cloud.stream.messaging.Processor
import org.springframework.scheduling.annotation.Scheduled
@EnableBinding(Processor::class)
class SetAggregatorProcessor(val storageService: StorageService) {

    @Input
    fun inputMessage(input: Map<String, Any>) {
        storageService.messages.add(input)
    }

    @Output
    @Scheduled(fixedDelay = 20000)
    fun produceOutput(): List<Map<String, Any>> {
        val message = storageService.messages
        storageService.messages.clear()
        return message
    }
}
In-memory storage:
import org.springframework.stereotype.Service

@Service
class StorageService {
    var messages: MutableList<Map<String, Any>> = mutableListOf()
}
This code generates the following error when I start pushing messages.
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:139) ~[spring-integration-core-5.5.8.jar:5.5.8]
The idea is to deploy this app as part of the Spring Cloud Stream (Data Flow) platform.
I prefer the declarative approach (over the functional approach), but if somebody knows how to do it the Reactor way, I could settle for that.
Thanks for any help or advice.
Thanks to this example (https://github.com/spring-cloud/spring-cloud-stream-samples/blob/main/processor-samples/sensor-average-reactive-kafka/src/main/java/sample/sensor/average/SensorAverageProcessorApplication.java), I was able to figure something out using Flux, in case someone else needs it:
import java.time.Duration
import java.util.function.BiFunction
import java.util.function.Function
import org.springframework.context.annotation.Configuration
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

@Configuration
class SetAggregatorProcessor : Function<Flux<Map<String, Any>>, Flux<MutableList<Map<String, Any>>>> {

    override fun apply(data: Flux<Map<String, Any>>): Flux<MutableList<Map<String, Any>>> {
        return data.window(Duration.ofSeconds(20)).flatMap { window: Flux<Map<String, Any>> ->
            this.aggregateList(window)
        }
    }

    private fun aggregateList(group: Flux<Map<String, Any>>): Mono<MutableList<Map<String, Any>>> {
        return group.reduce(
            mutableListOf(),
            BiFunction<MutableList<Map<String, Any>>, Map<String, Any>, MutableList<Map<String, Any>>> {
                accumulator: MutableList<Map<String, Any>>, element: Map<String, Any> ->
                accumulator.add(element)
                accumulator
            }
        )
    }
}
Update: https://github.com/chalimbu/AggregatorQuestionStack/tree/main/src/main/kotlin/com/project/co/SetAggregator
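If it helps, a hypothetical binding configuration for this functional style could look like the following in application.properties (the function name follows Spring Cloud Stream's decapitalized-bean-name convention; the destination names are made-up examples, not from the question):

spring.cloud.function.definition=setAggregatorProcessor
spring.cloud.stream.bindings.setAggregatorProcessor-in-0.destination=input-topic
spring.cloud.stream.bindings.setAggregatorProcessor-out-0.destination=output-topic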

throw not found exception if pubsub topic is not available

I am using Spring Boot to interact with a Pub/Sub topic.
My config class for this connection looks like this:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.gcp.pubsub.core.PubSubTemplate;
import org.springframework.cloud.gcp.pubsub.core.publisher.PubSubPublisherTemplate;
import org.springframework.cloud.gcp.pubsub.support.PublisherFactory;
import org.springframework.cloud.gcp.pubsub.support.converter.SimplePubSubMessageConverter;
import org.springframework.util.Assert;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.SettableListenableFuture;
import com.google.api.core.ApiFuture;
import com.google.api.core.ApiFutureCallback;
import com.google.api.core.ApiFutures;
import com.google.pubsub.v1.PubsubMessage;
public abstract class PubSubPublisher {

    private static final Logger LOGGER = LoggerFactory.getLogger(PubSubPublisher.class);

    private final PubSubTemplate pubSubTemplate;

    protected PubSubPublisher(PubSubTemplate pubSubTemplate) {
        this.pubSubTemplate = pubSubTemplate;
    }

    protected abstract String topic(String topicName);

    public ListenableFuture<String> publish(String topicName, String message) {
        LOGGER.info("Publishing to topic [{}]. Message: [{}]", topicName, message);
        return pubSubTemplate.publish(topicName, message);
    }
}
And I am calling this in my service, like this:
publisher.publish(topic-name, payload);
This publish method is an async one, which always passes and does not wait for acknowledgment. I added get() after publish to wait until it gets the response from Pub/Sub.
But I want to know: in case my topic is not already present and I try to push some message, it should throw an error like "resource not found", while still using the default async method.
Maybe implementing the callback would help, but I am unable to do that in my code. And the current overridden publish method that uses the callback just logs a WARN, not an exception; I want that to be an exception. That is the reason I want to implement the callback.
You can check if the topic is already present:
from google.cloud import pubsub_v1

project_id = "projectname"
topic_name = "unknowTopic"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_name)

try:
    response = publisher.get_topic(topic_path)
except Exception as e:
    print(e)
This returns the error as
404 Resource not found (resource=unknowTopic).
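Since the question uses Spring Boot, here is a hedged Java sketch of the same check with Spring Cloud GCP's PubSubAdmin, whose getTopic(name) returns null when the topic does not exist (the class name and the exception choice below are illustrative assumptions, not from the question):

import org.springframework.cloud.gcp.pubsub.PubSubAdmin;
import org.springframework.cloud.gcp.pubsub.core.PubSubTemplate;
import org.springframework.util.concurrent.ListenableFuture;

public class CheckedPubSubPublisher {

    private final PubSubAdmin pubSubAdmin;
    private final PubSubTemplate pubSubTemplate;

    public CheckedPubSubPublisher(PubSubAdmin pubSubAdmin, PubSubTemplate pubSubTemplate) {
        this.pubSubAdmin = pubSubAdmin;
        this.pubSubTemplate = pubSubTemplate;
    }

    public ListenableFuture<String> publish(String topicName, String message) {
        // Fail fast with a clear exception instead of a silent WARN from the async path
        if (pubSubAdmin.getTopic(topicName) == null) {
            throw new IllegalStateException("Resource not found: topic [" + topicName + "]");
        }
        return pubSubTemplate.publish(topicName, message);
    }
}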

Spring-Integration: Tcp Server Response not sent on Exception

I migrated legacy TCP server code into Spring Boot and added spring-integration (annotation based) dependencies to handle TCP socket connections.
My inbound channel is tcpIn(), my outbound channel is serviceChannel(), and I have created a custom channel [exceptionEventChannel()] to hold exception event messages.
I have a custom serializer/deserializer (ByteArrayLengthPrefixSerializer extends AbstractPooledBufferByteArraySerializer) and a MessageHandler @ServiceActivator method to send the response back to the TCP client.
//SpringBoot 2.0.3.RELEASE, Spring Integration 5.0.6.RELEASE
package com.test.config;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.EventListener;
import org.springframework.integration.annotation.IntegrationComponentScan;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.annotation.Transformer;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.event.inbound.ApplicationEventListeningMessageProducer;
import org.springframework.integration.ip.IpHeaders;
import org.springframework.integration.ip.tcp.TcpReceivingChannelAdapter;
import org.springframework.integration.ip.tcp.TcpSendingMessageHandler;
import org.springframework.integration.ip.tcp.connection.*;
import org.springframework.integration.ip.tcp.serializer.TcpDeserializationExceptionEvent;
import org.springframework.integration.router.ErrorMessageExceptionTypeRouter;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessageHandlingException;
import org.springframework.messaging.MessagingException;
import java.io.IOException;
@Configuration
@IntegrationComponentScan
public class TcpConfiguration {

    @SuppressWarnings("unused")
    @Value("${tcp.connection.port}")
    private int tcpPort;

    @Bean
    TcpConnectionEventListener customerTcpListener() {
        return new TcpConnectionEventListener();
    }

    @Bean
    public MessageChannel tcpIn() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel serviceChannel() {
        return new DirectChannel();
    }

    @ConditionalOnMissingBean(name = "errorChannel")
    @Bean
    public MessageChannel errorChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel exceptionEventChannel() {
        return new DirectChannel();
    }

    @Bean
    public ByteArrayLengthPrefixSerializer byteArrayLengthPrefixSerializer() {
        ByteArrayLengthPrefixSerializer byteArrayLengthPrefixSerializer = new ByteArrayLengthPrefixSerializer();
        byteArrayLengthPrefixSerializer.setMaxMessageSize(98304); // max allowed size set to 96kb
        return byteArrayLengthPrefixSerializer;
    }

    @Bean
    public AbstractServerConnectionFactory tcpNetServerConnectionFactory() {
        TcpNetServerConnectionFactory tcpServerCf = new TcpNetServerConnectionFactory(tcpPort);
        tcpServerCf.setSerializer(byteArrayLengthPrefixSerializer());
        tcpServerCf.setDeserializer(byteArrayLengthPrefixSerializer());
        return tcpServerCf;
    }

    @Bean
    public TcpReceivingChannelAdapter tcpReceivingChannelAdapter() {
        TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
        adapter.setConnectionFactory(tcpNetServerConnectionFactory());
        adapter.setOutputChannel(tcpIn());
        adapter.setErrorChannel(exceptionEventChannel());
        return adapter;
    }

    @ServiceActivator(inputChannel = "exceptionEventChannel", outputChannel = "serviceChannel")
    public String handle(Message<MessagingException> msg) {
        //String unfilteredMessage = new String(byteMessage, StandardCharsets.US_ASCII);
        System.out.println("-----------------EXCEPTION ==> " + msg);
        return msg.toString();
    }

    @Transformer(inputChannel = "errorChannel", outputChannel = "serviceChannel")
    public String transformer(String msg) {
        //String unfilteredMessage = new String(byteMessage, StandardCharsets.US_ASCII);
        System.out.println("-----------------ERROR ==> " + msg);
        return msg.toString();
    }

    @ServiceActivator(inputChannel = "serviceChannel")
    @Bean
    public TcpSendingMessageHandler out(AbstractServerConnectionFactory cf) {
        TcpSendingMessageHandler tcpSendingMessageHandler = new TcpSendingMessageHandler();
        tcpSendingMessageHandler.setConnectionFactory(cf);
        return tcpSendingMessageHandler;
    }

    @Bean
    public ApplicationListener<TcpDeserializationExceptionEvent> listener() {
        return new ApplicationListener<TcpDeserializationExceptionEvent>() {
            @Override
            public void onApplicationEvent(TcpDeserializationExceptionEvent tcpDeserializationExceptionEvent) {
                exceptionEventChannel().send(MessageBuilder.withPayload(tcpDeserializationExceptionEvent.getCause())
                        .build());
            }
        };
    }
}
Messages on tcpIn() are sent to a @ServiceActivator method inside a separate @Component class, which is structured like so:
@Component
public class TcpServiceActivator {

    @Autowired
    public TcpServiceActivator() {
    }

    @ServiceActivator(inputChannel = "tcpIn", outputChannel = "serviceChannel")
    public String service(byte[] byteMessage) {
        // Business Logic returns String Ack Response
    }
}
I don't have issues running the success scenario. My TCP test client gets the Ack response as expected.
However, when I try to simulate an exception, say a deserializer exception, the exception message is not sent back as a response to the TCP client.
I can see my ApplicationListener getting the TcpDeserializationExceptionEvent and sending the message to exceptionEventChannel. The @ServiceActivator method handle(Message msg) also prints my exception message. But it never reaches the breakpoints (in debug mode) inside the MessageHandler method out(AbstractServerConnectionFactory cf).
I am struggling to understand what's going wrong. Thanks for any help in advance.
UPDATE: I notice that the socket is closed, due to the exception, before the response can be sent. I'm trying to figure out a way around this.
SOLUTION UPDATE (12th Mar 2019) :
Courtesy of Gary, I edited my deserializer to return a message that can be recognized by a @Router method and redirected to errorChannel. The @ServiceActivator listening to errorChannel then sends the desired error message to outputChannel. This solution seems to work.
My deserializer method inside ByteArrayLengthPrefixSerializer returns a "special value", as Gary recommended, instead of the original inputStream message:
public byte[] doDeserialize(InputStream inputStream, byte[] buffer) throws IOException {
    boolean isValidMessage = false;
    try {
        int messageLength = this.readPrefix(inputStream);
        if (messageLength > 0 && fillUntilMaxDeterminedSize(inputStream, buffer, messageLength)) {
            return this.copyToSizedArray(buffer, messageLength);
        }
        return EventType.MSG_INVALID.getName().getBytes();
    } catch (SoftEndOfStreamException eose) {
        return EventType.MSG_INVALID.getName().getBytes();
    }
}
I also made a few new channels to accommodate my router, so the flows are as follows:
Success flow:
tcpIn (@Router) -> serviceChannel (@ServiceActivator that holds the business logic) -> outputChannel (@ServiceActivator that sends the response to the client)
Exception flow:
tcpIn (@Router) -> errorChannel (@ServiceActivator that prepares the error response message) -> outputChannel (@ServiceActivator that sends the response to the client)
My @Router and error-handling @ServiceActivator:
@Router(inputChannel = "tcpIn", defaultOutputChannel = "errorChannel")
public String messageRouter(byte[] byteMessage) {
    String unfilteredMessage = new String(byteMessage, StandardCharsets.US_ASCII);
    System.out.println("------------------> " + unfilteredMessage);
    if (Arrays.equals(EventType.MSG_INVALID.getName().getBytes(), byteMessage)) {
        return "errorChannel";
    }
    return "serviceChannel";
}

@ServiceActivator(inputChannel = "errorChannel", outputChannel = "outputChannel")
public String errorHandler(byte[] byteMessage) {
    return Message.ACK_RETRY;
}
The error channel is for handling exceptions that occur while processing a message. Deserialization errors occur before a message is created (the deserializer decodes the payload for the message).
Deserialization exceptions are fatal and, as you have observed, the socket is closed.
One option would be to catch the exception in the deserializer and return a "special" value that indicates a deserialization exception occurred, then check for that value in your main flow.

is putting sqs-consumer to detect receiveMessage event in sqs scalable

I am using AWS SQS as a message queue. After sqs.sendMessage sends the data, I want to detect sqs.receiveMessage via either an infinite loop or event triggering, in a scalable way. Then I came across sqs-consumer
to handle sqs.receiveMessage events the moment it receives the messages. But I was wondering: is this the most suitable way to handle message passing between microservices, or is there a better way?
I had written code in Java for fetching the data from the SQS queue with the SQSBufferedAsyncClient; the advantage of using this API is that it buffers the messages in async mode.
/**
*
*/
package com.sxm.aota.tsc.config;
import java.net.UnknownHostException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonWebServiceRequest;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.retry.RetryPolicy;
import com.amazonaws.retry.RetryPolicy.BackoffStrategy;
import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.AmazonSQSAsyncClient;
import com.amazonaws.services.sqs.buffered.AmazonSQSBufferedAsyncClient;
import com.amazonaws.services.sqs.buffered.QueueBufferConfig;
@Configuration
public class SQSConfiguration {

    /** The properties cache config. */
    @Autowired
    private PropertiesCacheConfig propertiesCacheConfig;

    @Bean
    public AmazonSQSAsync amazonSQSClient() {
        // Create Client Configuration
        ClientConfiguration clientConfig = new ClientConfiguration()
                .withMaxErrorRetry(5)
                .withConnectionTTL(10_000L)
                .withTcpKeepAlive(true)
                .withRetryPolicy(new RetryPolicy(
                        null,
                        new BackoffStrategy() {
                            @Override
                            public long delayBeforeNextRetry(AmazonWebServiceRequest req,
                                    AmazonClientException exception, int retries) {
                                // Delay between retries is 10s unless it is UnknownHostException
                                // for which retry is 60s
                                return exception.getCause() instanceof UnknownHostException ? 60_000L : 10_000L;
                            }
                        }, 10, true));

        // Create Amazon client
        AmazonSQSAsync asyncSqsClient = null;
        if (propertiesCacheConfig.isIamRole()) {
            asyncSqsClient = new AmazonSQSAsyncClient(new InstanceProfileCredentialsProvider(true), clientConfig);
        } else {
            asyncSqsClient = new AmazonSQSAsyncClient(
                    new BasicAWSCredentials("sceretkey", "accesskey"));
        }
        final Regions regions = Regions.fromName(propertiesCacheConfig.getRegionName());
        asyncSqsClient.setRegion(Region.getRegion(regions));
        asyncSqsClient.setEndpoint(propertiesCacheConfig.getEndPoint());

        // Buffer for request batching
        final QueueBufferConfig bufferConfig = new QueueBufferConfig();
        // Ensure visibility timeout is maintained
        bufferConfig.setVisibilityTimeoutSeconds(20);
        // Enable long polling
        bufferConfig.setLongPoll(true);
        // Set batch parameters
        // bufferConfig.setMaxBatchOpenMs(500);
        // Set to receive messages only on demand
        // bufferConfig.setMaxDoneReceiveBatches(0);
        // bufferConfig.setMaxInflightReceiveBatches(0);
        return new AmazonSQSBufferedAsyncClient(asyncSqsClient, bufferConfig);
    }
}
Then I wrote the scheduler, which executes every 2 seconds and fetches the data from the queue, processes it, and deletes it from the queue before the visibility timeout; otherwise it will be ready for processing again when the visibility timeout expires.
package com.sxm.aota.tsc.sqs;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import javax.annotation.PostConstruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.DependsOn;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.model.DeleteMessageRequest;
import com.amazonaws.services.sqs.model.GetQueueUrlRequest;
import com.amazonaws.services.sqs.model.GetQueueUrlResult;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.ReceiveMessageResult;
import com.fasterxml.jackson.databind.ObjectMapper;
/**
 * The Class TSCDataSenderScheduledTask.
 *
 * Sends the aggregated Vehicle data to TSC in batches.
 */
@EnableScheduling
@Component("sqsScheduledTask")
@DependsOn({ "propertiesCacheConfig", "amazonSQSClient" })
public class SQSScheduledTask {

    private static final Logger LOGGER = LoggerFactory.getLogger(SQSScheduledTask.class);

    @Autowired
    private PropertiesCacheConfig propertiesCacheConfig;

    @Autowired
    public AmazonSQSAsync amazonSQSClient;

    /**
     * Timer task that will run after a specific interval of time. Majorly
     * responsible for sending the data in batches to TSC.
     */
    private String queueUrl;

    private final ObjectMapper mapper = new ObjectMapper();

    @PostConstruct
    public void initialize() throws Exception {
        LOGGER.info("SQS-Publisher", "Publisher initializing for queue " + propertiesCacheConfig.getSQSQueueName(),
                "Publisher initializing for queue " + propertiesCacheConfig.getSQSQueueName());
        // Get the queue URL
        final GetQueueUrlRequest request = new GetQueueUrlRequest().withQueueName(propertiesCacheConfig.getSQSQueueName());
        final GetQueueUrlResult response = amazonSQSClient.getQueueUrl(request);
        queueUrl = response.getQueueUrl();
        LOGGER.info("SQS-Publisher", "Publisher initialized for queue " + propertiesCacheConfig.getSQSQueueName(),
                "Publisher initialized for queue " + propertiesCacheConfig.getSQSQueueName() + ", URL = " + queueUrl);
    }

    @Scheduled(fixedDelayString = "${sqs.consumer.delay}")
    public void timerTask() {
        final ReceiveMessageResult receiveResult = getMessagesFromSQS();
        String messageBody = null;
        if (receiveResult != null && receiveResult.getMessages() != null && !receiveResult.getMessages().isEmpty()) {
            try {
                messageBody = receiveResult.getMessages().get(0).getBody();
                String messageReceiptHandle = receiveResult.getMessages().get(0).getReceiptHandle();
                Vehicles vehicles = mapper.readValue(messageBody, Vehicles.class);
                processMessage(vehicles.getVehicles(), messageReceiptHandle);
            } catch (Exception e) {
                LOGGER.error("Exception while processing SQS message : {}", messageBody);
                // Message is not deleted on SQS and will be processed again after the visibility timeout
            }
        }
    }

    public void processMessage(List<Vehicle> vehicles, String messageReceiptHandle) throws InterruptedException {
        // processing code
        // delete the SQS message as the processing is completed
        // Need to create an atomic counter that will be incremented by all TS.. Once it is 0, we will delete the messages
        amazonSQSClient.deleteMessage(new DeleteMessageRequest(queueUrl, messageReceiptHandle));
    }

    private ReceiveMessageResult getMessagesFromSQS() {
        try {
            // Create a new request and fetch data from the Amazon SQS queue
            final ReceiveMessageResult receiveResult = amazonSQSClient
                    .receiveMessage(new ReceiveMessageRequest().withMaxNumberOfMessages(1).withQueueUrl(queueUrl));
            return receiveResult;
        } catch (Exception e) {
            LOGGER.error("Error while fetching data from SQS", e);
        }
        return null;
    }
}

How to register my custom MessageBodyReader in my CLIENT?

Maybe somebody can help me figure out how to solve this.
I am using jersey-apache-client 1.17.
I tried to use the Jersey client to build a standalone application (no servlet container or whatever, just the Java classes) which communicates with a RESTful API, and everything worked fine until I tried to handle the media type "text/csv; charset=utf-8", which is a CSV stream sent by the server.
The thing is that I can read this stream with the following code:
InputStreamReader reader = new InputStreamReader(itemExportBuilder
        .get(ClientResponse.class).getEntityInputStream());
Csv csv = new Csv();
Input input = csv.createInput(reader);
try {
    String[] readLine;
    while ((readLine = input.readLine()) != null) {
        LOG.debug("Reading CSV: {}", readLine);
    }
} catch (IOException e) {
    e.printStackTrace();
}
try {
    input.close();
} catch (IOException e) {
    e.printStackTrace();
}
But I'd like to encapsulate it in a MessageBodyReader. After writing this code, I just can't make the client use the following class:
package client.response;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.List;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;
import javax.ws.rs.ext.Provider;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@Provider
public class ItemExportMessageBodyReader implements MessageBodyReader<ItemExportResponse> {

    private static final Logger LOG = LoggerFactory.getLogger(ItemExportMessageBodyReader.class);

    private static final Integer SKU = 0;
    private static final Integer BASE_SKU = 1;

    public boolean isReadable(Class<?> paramClass, Type type, Annotation[] annotations,
            MediaType mediaType) {
        LOG.info("Checking if content is readable or not");
        return paramClass == ItemExportResponse.class && !mediaType.isWildcardType()
                && !mediaType.isWildcardSubtype()
                && mediaType.isCompatible(MediaType.valueOf("text/csv; charset=utf-8"));
    }

    public ItemExportResponse readFrom(Class<ItemExportResponse> paramClass, Type paramType,
            Annotation[] paramArrayOfAnnotation, MediaType paramMediaType,
            MultivaluedMap<String, String> paramMultivaluedMap, InputStream entityStream)
            throws IOException, WebApplicationException {
        InputStreamReader reader = new InputStreamReader(entityStream);
        Csv csv = new Csv();
        Input input = csv.createInput(reader);
        List<Item> items = new ArrayList<Item>();
        try {
            String[] readLine;
            while ((readLine = input.readLine()) != null) {
                LOG.trace("Reading CSV: {}", readLine);
                Item item = new Item();
                item.setBaseSku(readLine[BASE_SKU]);
                items.add(item);
            }
        } catch (IOException e) {
            LOG.warn("Item export HTTP response handling failed", e);
        } finally {
            try {
                input.close();
            } catch (IOException e) {
                LOG.warn("Could not close the HTTP response stream", e);
            }
        }
        ItemExportResponse response = new ItemExportResponse();
        response.setItems(items);
        return response;
    }
}
The following documentation says that the preferred way of making this work in a JAX-RS client is to register the message body reader with the code below:
Using Entity Providers with JAX-RS Client API
Client client = ClientBuilder.newBuilder().register(MyBeanMessageBodyReader.class).build();
Response response = client.target("http://example/comm/resource").request(MediaType.APPLICATION_XML).get();
System.out.println(response.getStatus());
MyBean myBean = response.readEntity(MyBean.class);
System.out.println(myBean);
Now the thing is that I can't use the ClientBuilder. I have to extend from a specific class which constructs the client another way, and I have no access to change the construction.
So when I receive the response from the server, the client fails with the following Exception:
com.sun.jersey.api.client.ClientHandlerException: A message body reader for Java class client.response.ItemExportResponse, and Java type class client.response.ItemExportResponse, and MIME media type text/csv; charset=utf-8 was not found
Any other way to register my MessageBodyReader?
OK, if anybody bumps into my question: I solved this mystery by upgrading from Jersey 1.17 to version 2.9. The documentation I linked above covers this version, not the old one; this is where the confusion stems from.
Jersey introduced backward-incompatible changes starting from version 2, so I have no clue how to configure this in version 1.17.
In version 2 the proposed solution worked fine.
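For anyone who has to stay on Jersey 1.x: providers there are registered through a ClientConfig rather than a ClientBuilder, so if you can influence the client construction at all, a minimal sketch like the following should work (untested against 1.17, and it assumes you control the Client.create() call):

import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.config.ClientConfig;
import com.sun.jersey.api.client.config.DefaultClientConfig;

ClientConfig config = new DefaultClientConfig();
// Register the custom reader class so Jersey 1.x can discover it
config.getClasses().add(ItemExportMessageBodyReader.class);
Client client = Client.create(config);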
