Is using sqs-consumer to detect receiveMessage events in SQS scalable? - microservices

I am using AWS SQS as a message queue. After sqs.sendMessage sends the data, I want to detect sqs.receiveMessage via either an infinite loop or event triggering, in a scalable way. Then I came across sqs-consumer,
which handles sqs.receiveMessage events the moment messages arrive. But I was wondering: is it the most suitable way to handle message passing between microservices, or is there a better way to handle this?
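For what it's worth, the "infinite loop" approach need not busy-wait: with SQS long polling, a receiveMessage call blocks for up to 20 seconds until messages arrive. A minimal sketch with the AWS SDK for Java (the queue URL and default client setup are placeholder assumptions, not from the post above):
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class SimpleSqsPoller {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        // Placeholder queue URL
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue";
        while (true) {
            // Long polling: the call blocks for up to 20s until messages arrive,
            // so this loop does not spin
            ReceiveMessageRequest request = new ReceiveMessageRequest(queueUrl)
                    .withWaitTimeSeconds(20)
                    .withMaxNumberOfMessages(10);
            for (Message message : sqs.receiveMessage(request).getMessages()) {
                System.out.println("Received: " + message.getBody());
                // Delete only after successful processing
                sqs.deleteMessage(queueUrl, message.getReceiptHandle());
            }
        }
    }
}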

I had written code in Java for fetching data from the SQS queue with AmazonSQSBufferedAsyncClient; the advantage of this API is that it buffers the messages in async mode.
/**
 * SQS client configuration using the buffered async client.
 */
package com.sxm.aota.tsc.config;

import java.net.UnknownHostException;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonWebServiceRequest;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.retry.RetryPolicy;
import com.amazonaws.retry.RetryPolicy.BackoffStrategy;
import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.AmazonSQSAsyncClient;
import com.amazonaws.services.sqs.buffered.AmazonSQSBufferedAsyncClient;
import com.amazonaws.services.sqs.buffered.QueueBufferConfig;

@Configuration
public class SQSConfiguration {

    /** The properties cache config. */
    @Autowired
    private PropertiesCacheConfig propertiesCacheConfig;

    @Bean
    public AmazonSQSAsync amazonSQSClient() {
        // Create client configuration
        ClientConfiguration clientConfig = new ClientConfiguration()
                .withMaxErrorRetry(5)
                .withConnectionTTL(10_000L)
                .withTcpKeepAlive(true)
                .withRetryPolicy(new RetryPolicy(
                        null,
                        new BackoffStrategy() {
                            @Override
                            public long delayBeforeNextRetry(AmazonWebServiceRequest req,
                                    AmazonClientException exception, int retries) {
                                // Delay between retries is 10s, unless the cause is an
                                // UnknownHostException, for which the delay is 60s
                                return exception.getCause() instanceof UnknownHostException ? 60_000L : 10_000L;
                            }
                        }, 10, true));

        // Create Amazon client
        AmazonSQSAsync asyncSqsClient;
        if (propertiesCacheConfig.isIamRole()) {
            asyncSqsClient = new AmazonSQSAsyncClient(new InstanceProfileCredentialsProvider(true), clientConfig);
        } else {
            // Note: BasicAWSCredentials takes the access key first, then the secret key
            asyncSqsClient = new AmazonSQSAsyncClient(
                    new AWSStaticCredentialsProvider(new BasicAWSCredentials("accesskey", "secretkey")),
                    clientConfig);
        }
        final Regions regions = Regions.fromName(propertiesCacheConfig.getRegionName());
        asyncSqsClient.setRegion(Region.getRegion(regions));
        asyncSqsClient.setEndpoint(propertiesCacheConfig.getEndPoint());

        // Buffer for request batching
        final QueueBufferConfig bufferConfig = new QueueBufferConfig();
        // Ensure visibility timeout is maintained
        bufferConfig.setVisibilityTimeoutSeconds(20);
        // Enable long polling
        bufferConfig.setLongPoll(true);
        // Set batch parameters
        // bufferConfig.setMaxBatchOpenMs(500);
        // Set to receive messages only on demand
        // bufferConfig.setMaxDoneReceiveBatches(0);
        // bufferConfig.setMaxInflightReceiveBatches(0);
        return new AmazonSQSBufferedAsyncClient(asyncSqsClient, bufferConfig);
    }
}
Then I wrote the scheduler, which executes every 2 seconds, fetches the data from the queue, processes it, and deletes it from the queue before the visibility timeout; otherwise the message becomes available for processing again once the visibility timeout expires.
package com.sxm.aota.tsc.sqs;

import java.util.List;

import javax.annotation.PostConstruct;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.DependsOn;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import com.amazonaws.services.sqs.AmazonSQSAsync;
import com.amazonaws.services.sqs.model.DeleteMessageRequest;
import com.amazonaws.services.sqs.model.GetQueueUrlRequest;
import com.amazonaws.services.sqs.model.GetQueueUrlResult;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.ReceiveMessageResult;
import com.fasterxml.jackson.databind.ObjectMapper;

/**
 * The Class SQSScheduledTask.
 *
 * Sends the aggregated Vehicle data to TSC in batches.
 */
@EnableScheduling
@Component("sqsScheduledTask")
@DependsOn({ "propertiesCacheConfig", "amazonSQSClient" })
public class SQSScheduledTask {

    private static final Logger LOGGER = LoggerFactory.getLogger(SQSScheduledTask.class);

    @Autowired
    private PropertiesCacheConfig propertiesCacheConfig;

    @Autowired
    public AmazonSQSAsync amazonSQSClient;

    /**
     * Timer task that runs at a fixed interval, mainly responsible for
     * sending the data in batches to TSC.
     */
    private String queueUrl;

    private final ObjectMapper mapper = new ObjectMapper();

    @PostConstruct
    public void initialize() throws Exception {
        LOGGER.info("SQS-Publisher: initializing for queue {}", propertiesCacheConfig.getSQSQueueName());
        // Get queue URL
        final GetQueueUrlRequest request = new GetQueueUrlRequest().withQueueName(propertiesCacheConfig.getSQSQueueName());
        final GetQueueUrlResult response = amazonSQSClient.getQueueUrl(request);
        queueUrl = response.getQueueUrl();
        LOGGER.info("SQS-Publisher: initialized for queue {}, URL = {}", propertiesCacheConfig.getSQSQueueName(), queueUrl);
    }

    @Scheduled(fixedDelayString = "${sqs.consumer.delay}")
    public void timerTask() {
        final ReceiveMessageResult receiveResult = getMessagesFromSQS();
        String messageBody = null;
        if (receiveResult != null && receiveResult.getMessages() != null && !receiveResult.getMessages().isEmpty()) {
            try {
                messageBody = receiveResult.getMessages().get(0).getBody();
                String messageReceiptHandle = receiveResult.getMessages().get(0).getReceiptHandle();
                Vehicles vehicles = mapper.readValue(messageBody, Vehicles.class);
                processMessage(vehicles.getVehicles(), messageReceiptHandle);
            } catch (Exception e) {
                LOGGER.error("Exception while processing SQS message : {}", messageBody, e);
                // Message is not deleted from SQS and will be processed again after the visibility timeout
            }
        }
    }

    public void processMessage(List<Vehicle> vehicles, String messageReceiptHandle) throws InterruptedException {
        // processing code
        // Delete the SQS message once processing is completed.
        // Need to create an atomic counter that will be incremented by all TS..; once it reaches 0 we delete the message
        amazonSQSClient.deleteMessage(new DeleteMessageRequest(queueUrl, messageReceiptHandle));
    }

    private ReceiveMessageResult getMessagesFromSQS() {
        try {
            // Create a new request and fetch data from the Amazon SQS queue
            return amazonSQSClient
                    .receiveMessage(new ReceiveMessageRequest().withMaxNumberOfMessages(1).withQueueUrl(queueUrl));
        } catch (Exception e) {
            LOGGER.error("Error while fetching data from SQS", e);
        }
        return null;
    }
}
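The fixedDelayString above is resolved from the application properties; for the 2-second interval described earlier, the entry would be something like sqs.consumer.delay=2000 in application.properties (the exact value is an assumption based on the description).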

Related

Invalid Address JMSException when using temporary credentials with SQSSession

I am getting an error trying to connect to an SQS queue in another AWS account using JMS. I have tried to follow the approach taken in this answer, but I am receiving the following error:
com.amazonaws.services.sqs.model.AmazonSQSException: The address https://sqs.us-east-1.amazonaws.com/ is not valid for this endpoint. (Service: AmazonSQS; Status Code: 404; Error Code: InvalidAddress; Request ID: d7f72bd3-6240-5f63-b313-70c2d8978c14; Proxy: null)
Unlike the post mentioned above (which I believe has the account credentials in the default provider chain?), I am trying to assume a role that has access to this SQS queue. Is this not possible through JMS, or am I doing something incorrectly?
import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazon.sqs.javamessaging.SQSSession;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.support.destination.DynamicDestinationResolver;

import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.Session;

/**
 * A configuration class for JMS to poll an SQS queue
 * in another AWS account.
 */
@Configuration
public class TranslationJmsConfig {

    private static final Logger LOGGER = LoggerFactory.getLogger(TranslationJmsConfig.class);

    @Value("${iam.connection.arn}")
    private String connectionRoleArn;

    @Value("${account.id}")
    private String brokerAccountId;

    /**
     * JmsListenerContainerFactory bean for the translation processing response queue.
     *
     * @param concurrentConsumers number of concurrent consumers
     * @param maxConcurrentConsumers max number of concurrent consumers
     * @return an instance of JmsListenerContainerFactory
     */
    @Bean("translationJmsListenerContainerFactory")
    public DefaultJmsListenerContainerFactory translationJmsListenerContainerFactory(
            @Value("#{new Integer('${listener.concurrency}')}") int concurrentConsumers,
            @Value("#{new Integer('${listener.max.concurrency}')}") int maxConcurrentConsumers) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(getConnectionFactory(connectionRoleArn));
        factory.setDestinationResolver(new SqsDynamicDestinationResolver(brokerAccountId));
        factory.setSessionTransacted(false); // SQS does not support transactions
        // Automatic message acknowledgment after successful listener execution;
        // best-effort redelivery in case of a user exception as well as other
        // listener execution interruptions (such as the JVM dying).
        factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
        factory.setConcurrency(String.format("%d-%d", concurrentConsumers, maxConcurrentConsumers));
        return factory;
    }

    /**
     * Create a custom JMS template.
     * @return JmsTemplate
     */
    @Bean
    public JmsTemplate customJmsTemplate() {
        JmsTemplate jmsTemplate = new JmsTemplate(getConnectionFactory(connectionRoleArn));
        jmsTemplate.setDestinationResolver(new SqsDynamicDestinationResolver(brokerAccountId));
        return jmsTemplate;
    }

    /**
     * A dynamic destination resolver for an SQS queue.
     */
    public class SqsDynamicDestinationResolver extends DynamicDestinationResolver {

        private final String brokerAccountId;

        /**
         * Constructor.
         * @param brokerAccountId broker account id
         */
        public SqsDynamicDestinationResolver(String brokerAccountId) {
            this.brokerAccountId = brokerAccountId;
        }

        @Override
        protected Queue resolveQueue(Session session, String queueName) throws JMSException {
            if (session instanceof SQSSession) {
                SQSSession sqsSession = (SQSSession) session;
                return sqsSession.createQueue(queueName, brokerAccountId); // 404 invalid address -- something wrong with creds?
            }
            return super.resolveQueue(session, queueName);
        }
    }

    private ConnectionFactory getConnectionFactory(String connectionRoleArn) {
        AWSSecurityTokenService stsClient = AWSSecurityTokenServiceClient.builder().build();
        // Assume the connector account credentials -> so we can assume the customer account using chaining
        AWSCredentialsProvider dummyCredentialProviders = IdentityHelpers.assumeInternalRole(stsClient, connectionRoleArn); // a helper that assumes temporary creds
        return new SQSConnectionFactory(
                new ProviderConfiguration(),
                AmazonSQSClientBuilder.standard()
                        .withRegion(Regions.US_EAST_1)
                        .withCredentials(dummyCredentialProviders));
    }
}
I realized that when using temporary credentials, I didn't need the second parameter (the account id) of the sqsSession.createQueue call. So once I changed
sqsSession.createQueue(queueName, brokerAccountId);
to:
return sqsSession.createQueue(queueName);
it worked fine. I guess I misunderstood the need for the account id. I assume the parameter is used when you have multiple accounts in your provider chain and you want it to search a specific account? Any light on this would still be appreciated!

Call a BigQuery stored procedure (Routine) using Spring Boot

I'm trying to call a Google BigQuery stored procedure (Routine) using Spring Boot. I tried all the methods of the routines to extract data; however, it didn't help.
Has anyone ever created and called a BigQuery stored procedure (Routine) through Spring Boot? If so, how?
public static Boolean executeInsertQuery(String query, TableId tableId, String jobName) {
    log.info("Starting {} truncate query", jobName);
    BigQuery bigquery = GCPConfig.getBigQuery(); // bqClient
    // query configuration
    QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query)
            .setUseLegacySql(false)
            .setAllowLargeResults(true)
            .setDestinationTable(tableId)
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
            .build();
    try {
        // build and execute the query job
        QueryJob queryJob = new QueryJob.Builder(queryConfig).bigQuery(bigquery).jobName(jobName).build();
        QueryJob.Result result = queryJob.execute();
    } catch (JobException e) {
        log.error("{} unsuccessful. job id: {}, job name: {}. exception: {}", jobName, e.getJobId(),
                e.getJobName(), e.toString());
        return false;
    }
    return true; // missing in the original snippet; the method must return a value on the success path
}
package ops.google.com;

import com.google.auth.oauth2.GoogleCredentials;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

import java.io.File;
import java.io.FileInputStream;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class SelectFromBigQueryFunction {

    private static final Logger logger = LogManager.getLogger(SelectFromBigQueryFunction.class);

    public boolean tableSelectFromJoin(String key_path) {
        String projectID = "ProjectID";
        String datasetName = "DataSetName";
        String tableName1 = "sample_attribute_type";
        String tableName2 = "sample_attribute_value";
        boolean status = false;
        try {
            // Call a BQ function/routine, function name -> bq_function_name
            // String query = "SELECT DataSetName.bq_function_name(1, 1)";
            // Call a BQ stored procedure, procedure name -> bq_stored_procedure_name
            String query = "CALL DataSetName.bq_stored_procedure_name()";
            File credentialsPath = new File(key_path);
            FileInputStream serviceAccountStream = new FileInputStream(credentialsPath);
            GoogleCredentials credentials = ServiceAccountCredentials.fromStream(serviceAccountStream);
            // Initialize the client that will be used to send requests
            BigQuery bigquery = BigQueryOptions.newBuilder()
                    .setProjectId(projectID)
                    .setCredentials(credentials)
                    .build()
                    .getService();
            QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query).build();
            TableResult results = bigquery.query(queryConfig);
            results.iterateAll().forEach(row -> row.forEach(val -> System.out.printf("%s,", val.toString())));
            logger.info("Query performed successfully.");
            status = true;
        } catch (BigQueryException | InterruptedException e) {
            logger.error("Query not performed \n" + e.toString());
        } catch (Exception e) {
            logger.error("Some Exception \n" + e.toString());
        }
        return status;
    }
}

How to understand if a batch ended in a BatchToRecordAdapter

I am developing a Spring Boot application that reads messages from a topic. Messages are managed in a transaction, read as strings in batch mode, and then deserialized to objects. This deserialization may fail, but I don't want to discard the whole batch; rather, I would like to move the failed messages to a DLQ.
As I am using spring-kafka 2.6.5, I found out that I can use a BatchToRecordAdapter to achieve this. However, I did not find out how to know when I am reading the last message of a batch.
I would like to read one message at a time, deserialize it, and store it in an ArrayList; when the listener reads the last message, I want to do some processing and finally commit the transaction.
Thanks,
Giuseppe.
UPDATE
To achieve this, I overrode BatchToRecordAdapter and added headers that tell me the position of every element within its batch.
package com.doxee.commons.lifecycle.kafka;

import java.util.List;

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.ConsumerRecordRecoverer;
import org.springframework.kafka.listener.adapter.BatchToRecordAdapter;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;

/*
 * Insert a description here.
 *
 * Bugs: none known
 *
 * @author gmiano gmiano@doxee.com
 * @createDate 25/01/21
 *
 * Copyright (C) 2021 Doxee S.p.A. C.F. - P.IVA: IT02714390362. All Rights Reserved
 */
@Slf4j
public class BatchToEnrichedRecordAdapter<K, V> implements BatchToRecordAdapter<K, V> {

    private final ConsumerRecordRecoverer recoverer;

    public BatchToEnrichedRecordAdapter(ConsumerRecordRecoverer recoverer) {
        Assert.notNull(recoverer, "'recoverer' cannot be null");
        this.recoverer = recoverer;
    }

    @Override
    public void adapt(List<Message<?>> messages, List<ConsumerRecord<K, V>> records,
            Acknowledgment ack, Consumer<?, ?> consumer, Callback<K, V> callback) {
        for (int i = 0; i < messages.size(); ++i) {
            // Enrich each record with the batch size and its 1-based position in the batch
            Message<?> enrichedMessage = MessageBuilder.fromMessage(messages.get(i))
                    .setHeader(MyHeaders.BATCH_SIZE, messages.size())
                    .setHeader(MyHeaders.MESSAGE_BATCH_POSITION, i + 1)
                    .build();
            try {
                callback.invoke(records.get(i), ack, consumer, enrichedMessage);
            } catch (Exception ex) {
                this.recoverer.accept(records.get(i), ex);
            }
        }
    }
}
with this bean as the recoverer:
@Bean
ConsumerRecordRecoverer recoverer(KafkaOperations<?, ?> template) {
    return new DeadLetterPublishingRecoverer(template, (record, ex) -> {
        String srcTopic = record.topic();
        String srcKey = record.key().toString();
        log.error("Failed consume of message {} from topic {}", srcKey, srcTopic, ex);
        String dstTopic;
        if (ex.getCause() instanceof ClientResumableException) {
            dstTopic = srcTopic.concat(".RECOVERABLE");
        } else {
            dstTopic = srcTopic.concat(".DLT");
        }
        log.error("Cannot retry. Try to write message to topic: {}", dstTopic);
        return new TopicPartition(dstTopic, 0);
    });
}
Is this the proper solution?
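For completeness, one way a listener could use those headers to detect the end of a batch; this is a sketch, not from the original post: the topic name, container factory, and the deserialize/process helpers are placeholders, MyHeaders is the constants class used by the adapter above, and a single consumer thread is assumed for the buffer.
import java.util.ArrayList;
import java.util.List;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.stereotype.Component;

@Component
public class EnrichedBatchListener {

    // Accumulates the deserialized elements of the current batch (single-threaded consumer assumed)
    private final List<Object> buffer = new ArrayList<>();

    @KafkaListener(topics = "my-topic", containerFactory = "kafkaListenerContainerFactory")
    public void listen(String payload,
            @Header(MyHeaders.BATCH_SIZE) int batchSize,
            @Header(MyHeaders.MESSAGE_BATCH_POSITION) int position) {
        buffer.add(deserialize(payload)); // may throw: the recoverer then routes the record to the DLQ
        if (position == batchSize) {
            // Last record of the batch: run the aggregate processing here;
            // the transaction commits when the listener returns normally
            process(buffer);
            buffer.clear();
        }
    }

    private Object deserialize(String payload) { return payload; } // placeholder

    private void process(List<Object> batch) { } // placeholder
}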

Spring-Integration: Tcp Server Response not sent on Exception

I migrated legacy TCP server code into Spring Boot and added spring-integration (annotation based) dependencies to handle TCP socket connections.
My inbound channel is tcpIn(), my outbound channel is serviceChannel(), and I have created a custom channel [exceptionEventChannel()] to hold exception event messages.
I have a custom serializer/deserializer (ByteArrayLengthPrefixSerializer, which extends AbstractPooledBufferByteArraySerializer) and a MessageHandler @ServiceActivator method to send the response back to the TCP client.
// SpringBoot 2.0.3.RELEASE, Spring Integration 5.0.6.RELEASE
package com.test.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.IntegrationComponentScan;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.annotation.Transformer;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.ip.tcp.TcpReceivingChannelAdapter;
import org.springframework.integration.ip.tcp.TcpSendingMessageHandler;
import org.springframework.integration.ip.tcp.connection.*;
import org.springframework.integration.ip.tcp.serializer.TcpDeserializationExceptionEvent;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessagingException;

@Configuration
@IntegrationComponentScan
public class TcpConfiguration {

    @SuppressWarnings("unused")
    @Value("${tcp.connection.port}")
    private int tcpPort;

    @Bean
    TcpConnectionEventListener customerTcpListener() {
        return new TcpConnectionEventListener();
    }

    @Bean
    public MessageChannel tcpIn() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel serviceChannel() {
        return new DirectChannel();
    }

    @ConditionalOnMissingBean(name = "errorChannel")
    @Bean
    public MessageChannel errorChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel exceptionEventChannel() {
        return new DirectChannel();
    }

    @Bean
    public ByteArrayLengthPrefixSerializer byteArrayLengthPrefixSerializer() {
        ByteArrayLengthPrefixSerializer byteArrayLengthPrefixSerializer = new ByteArrayLengthPrefixSerializer();
        byteArrayLengthPrefixSerializer.setMaxMessageSize(98304); // max allowed size set to 96kb
        return byteArrayLengthPrefixSerializer;
    }

    @Bean
    public AbstractServerConnectionFactory tcpNetServerConnectionFactory() {
        TcpNetServerConnectionFactory tcpServerCf = new TcpNetServerConnectionFactory(tcpPort);
        tcpServerCf.setSerializer(byteArrayLengthPrefixSerializer());
        tcpServerCf.setDeserializer(byteArrayLengthPrefixSerializer());
        return tcpServerCf;
    }

    @Bean
    public TcpReceivingChannelAdapter tcpReceivingChannelAdapter() {
        TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
        adapter.setConnectionFactory(tcpNetServerConnectionFactory());
        adapter.setOutputChannel(tcpIn());
        adapter.setErrorChannel(exceptionEventChannel());
        return adapter;
    }

    @ServiceActivator(inputChannel = "exceptionEventChannel", outputChannel = "serviceChannel")
    public String handle(Message<MessagingException> msg) {
        System.out.println("-----------------EXCEPTION ==> " + msg);
        return msg.toString();
    }

    @Transformer(inputChannel = "errorChannel", outputChannel = "serviceChannel")
    public String transformer(String msg) {
        System.out.println("-----------------ERROR ==> " + msg);
        return msg;
    }

    @ServiceActivator(inputChannel = "serviceChannel")
    @Bean
    public TcpSendingMessageHandler out(AbstractServerConnectionFactory cf) {
        TcpSendingMessageHandler tcpSendingMessageHandler = new TcpSendingMessageHandler();
        tcpSendingMessageHandler.setConnectionFactory(cf);
        return tcpSendingMessageHandler;
    }

    @Bean
    public ApplicationListener<TcpDeserializationExceptionEvent> listener() {
        return new ApplicationListener<TcpDeserializationExceptionEvent>() {
            @Override
            public void onApplicationEvent(TcpDeserializationExceptionEvent tcpDeserializationExceptionEvent) {
                exceptionEventChannel().send(MessageBuilder.withPayload(tcpDeserializationExceptionEvent.getCause())
                        .build());
            }
        };
    }
}
Messages in tcpIn() are sent to a @ServiceActivator method inside a separate @Component class, which is structured like so:
@Component
public class TcpServiceActivator {

    @Autowired
    public TcpServiceActivator() {
    }

    @ServiceActivator(inputChannel = "tcpIn", outputChannel = "serviceChannel")
    public String service(byte[] byteMessage) {
        // Business logic returns a String ack response
    }
}
I don't have issues running a success scenario; my TCP test client gets the ack response as expected.
However, when I try to simulate an exception, say a deserialization exception, the exception message is not sent back as a response to the TCP client.
I can see my ApplicationListener getting the TcpDeserializationExceptionEvent and sending the message to exceptionEventChannel. The @ServiceActivator method handle(Message msg) also prints my exception message, but it never reaches the breakpoints (in debug mode) inside the MessageHandler method out(AbstractServerConnectionFactory cf).
I am struggling to understand what's going wrong. Thanks for any help in advance.
UPDATE: I notice that the socket is closed due to the exception before the response can be sent. I'm trying to figure out a way around this.
SOLUTION UPDATE (12th Mar 2019):
Courtesy of Gary, I edited my deserializer to return a message that can be recognized by a @Router method and redirected to errorChannel. The @ServiceActivator listening to errorChannel then sends the desired error message to outputChannel. This solution seems to work.
My deserializer method inside ByteArrayLengthPrefixSerializer now returns a "special value" as Gary recommended, instead of the original inputStream message:
public byte[] doDeserialize(InputStream inputStream, byte[] buffer) throws IOException {
    try {
        int messageLength = this.readPrefix(inputStream);
        if (messageLength > 0 && fillUntilMaxDeterminedSize(inputStream, buffer, messageLength)) {
            return this.copyToSizedArray(buffer, messageLength);
        }
        return EventType.MSG_INVALID.getName().getBytes();
    } catch (SoftEndOfStreamException eose) {
        return EventType.MSG_INVALID.getName().getBytes();
    }
}
I also made a few new channels to accommodate my @Router, such that the flow is as follows:
Success flow:
tcpIn (@Router) -> serviceChannel (@ServiceActivator that holds the business logic) -> outputChannel (@ServiceActivator that sends the response to the client)
Exception flow:
tcpIn (@Router) -> errorChannel (@ServiceActivator that prepares the error response message) -> outputChannel (@ServiceActivator that sends the response to the client)
My @Router and error-handling @ServiceActivator:
@Router(inputChannel = "tcpIn", defaultOutputChannel = "errorChannel")
public String messageRouter(byte[] byteMessage) {
    String unfilteredMessage = new String(byteMessage, StandardCharsets.US_ASCII);
    System.out.println("------------------> " + unfilteredMessage);
    if (Arrays.equals(EventType.MSG_INVALID.getName().getBytes(), byteMessage)) {
        return "errorChannel";
    }
    return "serviceChannel";
}

@ServiceActivator(inputChannel = "errorChannel", outputChannel = "outputChannel")
public String errorHandler(byte[] byteMessage) {
    return Message.ACK_RETRY;
}
The error channel is for handling exceptions that occur while processing a message. Deserialization errors occur before a message is created (the deserializer decodes the payload for the message).
Deserialization exceptions are fatal and, as you have observed, the socket is closed.
One option would be to catch the exception in the deserializer and return a "special" value that indicates a deserialization exception occurred, then check for that value in your main flow.

How to pass object from controller to step in Spring Batch

I want to pass reqData from my controller class to a step of my job. Is there any way to achieve this? Any help will be appreciated. I have an object of HttpRequestData which I have received in the controller. Thanks.
HttpRequestController.java
package com.npst.imps.controller;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import com.npst.imps.utils.HttpRequestData;
import com.npst.imps.utils.TransactionResponseData;

import javax.servlet.http.HttpSession;

@RestController
public class HttpRequestController {

    TransactionResponseData transactionResponseData;

    @Autowired
    HttpSession session;

    JobExecution jobExecution;

    @Autowired
    JobLauncher jobLauncher;

    @Autowired
    Job fundtrans;

    String test;

    @RequestMapping("/impsft")
    public String handleHttpRequest(@RequestBody HttpRequestData reqData) throws Exception {
        Logger logger = LoggerFactory.getLogger(this.getClass());
        try {
            JobParameters jobParameters = new JobParametersBuilder()
                    .addLong("time", System.currentTimeMillis())
                    .toJobParameters();
            jobExecution = jobLauncher.run(fundtrans, jobParameters);
            ExecutionContext context = jobExecution.getExecutionContext();
            // context.put("reqData", reqData);
            transactionResponseData = (TransactionResponseData) context.get("transactionData");
            // System.out.println(context.get("transactionResponseData"));
        } catch (Exception e) {
            logger.info(e.getMessage());
            e.printStackTrace();
        }
        return reqData + " " + transactionResponseData.getMsg() + ",Tid=" + transactionResponseData.getTid();
    }
}
Below is my step class.
I want to get the same reqData in this step class; from here onwards I will put it inside the StepExecution object in the afterStep method.
PrepareTransactionId.java
package com.npst.imps.action;

import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;

import javax.servlet.http.HttpSession;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.npst.imps.service.TransactionService;
import com.npst.imps.utils.GenericTicketKey;
import com.npst.imps.utils.HttpRequestData;
import com.npst.imps.utils.TicketGenerator;
import com.npst.imps.utils.TransactionResponseData;

@Service
public class PrepareTransactionId implements Tasklet, StepExecutionListener {

    static Logger logger = LoggerFactory.getLogger(PrepareTransactionId.class);

    String appId;

    private static TicketGenerator ticketGenerator = null;
    private static GenericTicketKey genericTicketKey = null;

    @Autowired
    HttpSession session;

    @Autowired
    TransactionService transactionService;

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        try {
            DateFormat dateFormat = new SimpleDateFormat("yyyyMMddHHmmss");
            Date date = new Date();
            String ticket;
            System.out.println("transactionService:: PrepareTransactionId" + transactionService);
            TransactionResponseData transactionData = new TransactionResponseData();
            // reqData is the HttpRequestData received in the controller;
            // getting hold of it here is exactly what this question is about
            System.out.println("reqData::" + reqData);
            long value = transactionService.getMaxTid(appId);
            logger.info("Max id from db::" + value);
            if (value == 0) {
                value = System.currentTimeMillis() / 10000;
                long l = value;
                ticket = l + "";
            }
            long l = value + 1;
            ticketGenerator = TicketGenerator.getInstance(9999999999L, 0, l);
            genericTicketKey = new GenericTicketKey(0, false, 10);
            ticket = ticketGenerator.getNextEdgeTicketFor(genericTicketKey);
            stepExecution.getJobExecution().getExecutionContext().put("ticket", ticket);
            ticket = appId + ticket;
            System.out.println("tid::" + ticket);
            stepExecution.getJobExecution().getExecutionContext().put("tid", ticket);
            stepExecution.getJobExecution().getExecutionContext().put("reqData", reqData);
            transactionData.setMsg("Request Received...");
            transactionData.setTid(ticket + "");
            transactionData.setNodeId(appId);
            transactionData.setReqtime(dateFormat.format(date));
            stepExecution.getJobExecution().getExecutionContext().put("transactionData", transactionData);
            logger.info("Request received with tid::" + ticket);
            ExitStatus exist = new ExitStatus("SUCCESS", "success");
            return exist.replaceExitCode("SUCCESS");
        } catch (Exception e) {
            e.printStackTrace();
            return ExitStatus.FAILED;
        }
    }

    public String getAppId() {
        return appId;
    }

    public void setAppId(String appId) {
        this.appId = appId;
    }

    @Override
    public void beforeStep(StepExecution arg0) {
        // TODO Auto-generated method stub
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        return null;
    }
}
TL;DR -> You can't.
JobParameters instances can only hold values of these types:
String
Long
Date
Double
The reason behind this is primarily persistence. Remember that all Spring Batch metadata (including job parameters) goes to a datasource.
Even to use custom objects, you would need to make sure that your object is immutable and thread-safe.
JobParameters documentation states:
Value object representing runtime parameters to a batch job. Because
the parameters have no individual meaning outside of the JobParameters
they are contained within, it is a value object rather than an entity.
It is also extremely important that a parameters object can be
reliably compared to another for equality, in order to determine if
one JobParameters object equals another. Furthermore, because these
parameters will need to be persisted, it is vital that the types added
are restricted. This class is immutable and therefore thread-safe.
JobParametersBuilder documentation states as well:
Helper class for creating JobParameters. Useful because all
JobParameter objects are immutable, and must be instantiated
separately to ensure typesafety. Once created, it can be used in the
same was a java.lang.StringBuilder (except, order is irrelevant), by
adding various parameter types and creating a valid JobParameters once
finished.
But I promise my objects are OK. Can I use them?
You could, but the Spring developers decided not to support this feature a long time ago.
This was discussed in the Spring forums and even a JIRA ticket was created - status: Won't Fix.
Related Links
Spring - JobParameters JavaDocs
Spring - JobParametersBuilder JavaDocs
Spring - JIRA Ticket
Spring - Forums Discussion
I would not suggest passing the complete HttpRequestData; rather, pass only the required information to the batch job. You can pass this information using JobParameters.
Sample code (the getters are illustrative; extract whatever fields you actually need):
JobParameters parameters = new JobParametersBuilder()
        .addString("key1", reqData.getData1())
        .addString("key2", reqData.getData2())
        .addString("key3", reqData.getData3())
        .toJobParameters();
Now, in the step, you can get the JobParameters from the StepExecution.
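For instance, inside the StepExecutionListener shown earlier, the values could be read back like this (a sketch; the keys match the builder calls above):
@Override
public void beforeStep(StepExecution stepExecution) {
    // Read back the values supplied at launch time
    String key1 = stepExecution.getJobParameters().getString("key1");
    String key2 = stepExecution.getJobParameters().getString("key2");
    logger.info("key1={}, key2={}", key1, key2);
}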
Putting a custom object directly into JobParameters is not possible, since JobParameter only has constructors for the four supported types; the object has to be converted to one of them first (toJsonString below is an illustrative helper, e.g. backed by Jackson):
Map<String, JobParameter> map = new HashMap<>();
// JobParameter has no Object constructor; convert the custom object
// to a supported type (here, a JSON String) before wrapping it
JobParameter myParameter = new JobParameter(toJsonString(yourCustomObject));
map.put("myobject", myParameter);
JobParameters jobParameters = new JobParameters(map);
