Calling SNS services from Lambda

I need help with Java AWS-Lambda.
I am working on a Lambda function to update a database. A CloudWatch Events rule triggers the Lambda, which updates the database based on some criteria. I want to send a notification as soon as the DB is updated, so I want to invoke SNS from the Lambda.
I am using an SNS account that runs with multi-factor authentication. Please suggest how to create/set up the Lambda to send the SNS notification.
Note: I am not looking for "how to trigger Lambda from SNS". I am looking for how to send an SNS notification from the Lambda function IN JAVA.

This is a simple way of publishing a message to an SNS topic using the AWS Java SDK.
To publish a message with attributes, use the AWS Java code sample "Publish a Message with Attributes to an Amazon SNS Topic".
You basically have to use the same code in your Lambda handler.
You also need to grant your Lambda's IAM role permission to publish to your SNS topic via an appropriate IAM policy; a sketch follows the code below.
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.model.PublishRequest;
import com.amazonaws.services.sns.model.PublishResult;

// Create the SNS client (AWS SDK for Java v1).
final AmazonSNS snsClient = AmazonSNSClientBuilder.defaultClient();
// Publish a message to an Amazon SNS topic.
final String msg = "If you receive this message, publishing a message to an Amazon SNS topic works.";
final PublishRequest publishRequest = new PublishRequest(topicArn, msg);
final PublishResult publishResponse = snsClient.publish(publishRequest);
// Print the MessageId of the message.
System.out.println("MessageId: " + publishResponse.getMessageId());
This is how I have done it in my project: I created a HelperService and used it in my Lambda handler. A wiring sketch follows the class below.
// my lambda handler
public ApiGatewayResponse handleRequest(ApiGatewayRequest request, Context context) {
    try {
        // call to the helper service
        PublishResult publishResult = helperService.publishMessage("Message to be published to sns topic");
        // return response
        return buildResponse(HttpStatus.SC_OK);
    } catch (Exception e) {
        this.logger.error(e.getMessage(), e);
        return buildResponse(e.getMessage(), HttpStatus.SC_INTERNAL_SERVER_ERROR);
    }
}
// HelperService class
public class HelperService {
    private final AmazonSNS snsClient;
    private final Configuration config;

    @Inject
    public HelperService(AmazonSNS snsClient, Configuration config) {
        this.snsClient = snsClient;
        this.config = config;
    }

    public PublishResult publishMessage(String message) {
        PublishRequest publishRequest = new PublishRequest(config.getAuditSNSTopicARN(), message);
        return snsClient.publish(publishRequest);
    }
}
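For completeness, a minimal wiring sketch in case you are not using a DI container (Configuration is the poster's own class; loadConfiguration() is a hypothetical helper that supplies the topic ARN):
AmazonSNS snsClient = AmazonSNSClientBuilder.defaultClient();
Configuration config = loadConfiguration(); // hypothetical helper, not part of the original code
HelperService helperService = new HelperService(snsClient, config);
PublishResult result = helperService.publishMessage("Database updated"); // illustrative message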

Related

Consumption of events stopped after the consumer throws an exception in Spring Cloud Stream?

I have an aggregation function that aggregates events published into the output channel. I have subscribed to the flux generated by the function like below:
@Component
public class EventsAggregator {
    @Autowired
    private Sinks.Many<Message<?>> eventsPublisher; // Used to publish events from different places in the code
    private final Function<Flux<Message<?>>, Flux<Message<?>>> aggregatorFunction;

    @PostConstruct
    public void aggregate() {
        Flux<Message<?>> output = aggregatorFunction.apply(eventsPublisher.asFlux());
        output.subscribe(this::aggregate);
    }

    public void aggregate(Message<?> aggregatedEventsMessage) {
        if (...) {
            // some code
        } else {
            throw new RuntimeException();
        }
    }
}
If the RuntimeException is thrown, the aggregation function stops working, and I get this message:
The [bean 'outputChannel'; defined in: 'class path resource [org/springframework/cloud/fn/aggregator/AggregatorFunctionConfiguration.class]'; from source: 'org.springframework.cloud.fn.aggregator.AggregatorFunctionConfiguration.outputChannel()'] doesn't have subscribers to accept messages
at org.springframework.util.Assert.state(Assert.java:97)
Is there any way to subscribe to the flux generated by the aggregation function in a safe way?
That's correct. That's how Reactive Streams work: if an exception is thrown, the subscriber is cancelled and no new data can be sent to that subscriber anymore.
Consider not throwing that exception up to the stream; a sketch of handling it inside the subscriber follows below.
See more in docs: https://docs.spring.io/spring-cloud-stream/docs/4.0.0-SNAPSHOT/reference/html/spring-cloud-stream.html#spring-cloud-stream-overview-error-handling
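For instance, a minimal sketch of the subscription from the question, reworked so the exception is caught inside the subscriber and the subscription stays alive (assumes a log field; the actual error handling is up to you):
Flux<Message<?>> output = aggregatorFunction.apply(eventsPublisher.asFlux());
output.subscribe(message -> {
    try {
        aggregate(message);
    } catch (RuntimeException e) {
        // Rethrowing here would cancel the subscription; log and continue instead.
        log.error("Aggregation failed for message {}", message, e);
    }
});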

Spring cloud function Function interface return success/failure handling

I currently have a Spring Cloud Stream application with a listener function that mainly listens to a certain topic and executes the following in sequence:
Consume messages from a topic
Store consumed message in the DB
Call an external service for some information
Process the data
Record the results in DB
Send the message to another topic
Acknowledge the message (I have the acknowledge mode set to manual)
We have decided to move to Spring Cloud Function, and I have already been able to do almost all of the steps above using the Function interface, with the source topic as input and the sink topic as output.
@Bean
public Function<Message<NotificationMessage>, Message<ValidatedEvent>> validatedProducts() {
    return message -> {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        return MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
    };
}
My problem is with exception handling in step 7 (acknowledge the message). We only acknowledge the message if we are sure it was sent successfully to the sink queue; otherwise we do not acknowledge it.
My question is: how can such a thing be implemented within Spring Cloud Function, especially since the send method is fully dependent on the Spring Framework (as the result of evaluating the Function interface implementation)?
Earlier, we could do this through try/catch:
@StreamListener(value = NotificationMessage.INPUT)
public void onMessage(Message<NotificationMessage> message) {
    try {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        Message<ValidatedEvent> outbound = MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
        kafkaTemplate.send(outbound);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
    } catch (Exception exception) {
        notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
    }
}
Is there a listener that triggers after the Function interface has returned successfully, something like KafkaSendCallback but without specifying a template?
Building upon what Oleg mentioned above, if you want to strictly restore the behavior in your StreamListener code, here is something you can try. Instead of using a function, you can switch to a consumer and then use KafkaTemplate to send on the outbound as you had previously.
@Bean
public Consumer<Message<NotificationMessage>> validatedProducts() {
    return message -> {
        try {
            Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
            notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
            String status = restEndpoint.getStatusFor(message.getPayload());
            ValidatedEvent event = getProcessingResult(message.getPayload(), status);
            Message<ValidatedEvent> outbound = MessageBuilder
                    .withPayload(event)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                    .build();
            kafkaTemplate.send(outbound); // here, you make sure that the data was sent successfully by using some callback
            // only ack if the data was sent successfully
            Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        } catch (Exception exception) {
            notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
        }
    };
}
Another thing that is worth looking into is using Kafka transactions, in which case if it doesn't work end-to-end, no acknowledgment will happen. Spring Cloud Stream binder has support for this based on the foundations in Spring for Apache Kafka. More details here. Here is the Spring Cloud Stream doc on this.
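If you explore the transactions route, a hedged configuration sketch (the property name comes from the Spring Cloud Stream Kafka binder documentation; the tx- prefix value is an arbitrary example):
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-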
Spring Cloud Stream has no knowledge of functions. It is just the same message handler as it was before, so the same approach with a callback that you used before would work with functions. So perhaps you can share some code that could clarify what you mean? I also don't understand what you mean by "…send method is fully dependent on the Spring Framework…".
Alright, so what I opted for was actually not to use KafkaTemplate (or StreamBridge, for that matter). While that is a feasible solution, it would mean that my Function gets split into a Consumer and some sort of improvised supplier (the KafkaTemplate in this case).
As I wanted to adhere to the design goals of the functional interface, I have isolated the behaviour for the database update in a ProducerListener implementation:
@Configuration
public class ProducerListenerConfiguration {
    private final MongoTemplate mongoTemplate;

    public ProducerListenerConfiguration(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public ProducerListener myProducerListener() {
        return new ProducerListener() {
            @SneakyThrows
            @Override
            public void onSuccess(ProducerRecord producerRecord, RecordMetadata recordMetadata) {
                final ValidatedEvent event = new ObjectMapper().readerFor(ValidatedEvent.class).readValue((byte[]) producerRecord.value());
                final var updateResult = updateDocumentProcessedState(event.getKey(), event.getPayload().getVersion(), true);
            }

            @SneakyThrows
            @Override
            public void onError(ProducerRecord producerRecord, @Nullable RecordMetadata recordMetadata, Exception exception) {
                ProducerListener.super.onError(producerRecord, recordMetadata, exception);
            }
        };
    }

    public UpdateResult updateDocumentProcessedState(String id, long version, boolean isProcessed) {
        Query query = new Query();
        query.addCriteria(Criteria.where("_id").is(id));
        Update update = new Update();
        update.set("processed", isProcessed);
        update.set("version", version);
        return mongoTemplate.updateFirst(query, update, ProductChangedEntity.class);
    }
}
Then with each successful attempt, the DB is updated with the processing result and the updated version number.

How to trigger a retry with Spring Cloud Functions with AWS Lambda and SNS Events

I have a Spring Cloud Function running on AWS Lambda handling SNS events.
For some error cases I would like to trigger automatic Lambda retries or trigger the retry capabilities of the SNS service. The SNS retry policies are in their default configuration.
I tried to return a JSON body of {"statusCode":500}, which works when we make a test invocation in the AWS console.
However, when we send this status, no retry invocation of the function is triggered.
We use the SpringBootRequestHandler:
public class CustomerUpdatePersonHandler extends SpringBootRequestHandler<SNSEvent, Response> {
}

@Component
public class CustomerUpdatePerson implements Function<SNSEvent, Response> {
    @Override
    public Response apply(final SNSEvent snsEvent) {
        // when something goes wrong, return 500 and trigger a retry
        return new Response(500);
    }
}

public class Response {
    private int statusCode;

    public Response(int code) {
        this.statusCode = code;
    }

    public int getStatusCode() {
        return statusCode;
    }
}
We currently don't provide support for retry, but given that every function is transformed to a reactive function anyway, you can certainly do it yourself if you declare your function using the Reactor API: basically Function<Flux<SNSEvent>, Flux<Response>>, and then you can use one of the retry operations available (e.g., retryBackoff), as sketched below.
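A minimal sketch of that suggestion (process(...) is an assumed application method that throws on failure; newer Reactor versions replace retryBackoff with retryWhen(Retry.backoff(...)), which is used here):
import java.time.Duration;
import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import reactor.core.publisher.Flux;
import reactor.util.retry.Retry;

@Bean
public Function<Flux<SNSEvent>, Flux<Response>> customerUpdatePerson() {
    return snsEvents -> snsEvents
            .map(snsEvent -> process(snsEvent)) // assumed application logic, not part of the original post
            .retryWhen(Retry.backoff(3, Duration.ofSeconds(2))); // up to 3 retries with exponential backoff
}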

AMQP unable to receive message back from listener

I have an issue with receiving a message back from the listener to the publisher: I am getting an AmqpReplyTimeoutException. Below is the code of the publisher from which I am publishing to the queue.
for (CsvWrapperPojo item : items) {
    resultList.addAll(item.getDbResultList());
    for (CSVPojo pojo : item.getQueueRequestList()) {
        sampleResponseMessageRabbitConverterFuture = asyncRabbitTemplate.convertSendAndReceive("spring-boot-rabbitmq-Interactive.async_Solve_InteractiveMsg", "Interactive_RequestQueue", pojo);
        //CSVPojo res = (CSVPojo) rabbitTemplate.convertSendAndReceive("spring-boot-rabbitmq-Interactive.async_Solve_InteractiveMsg", "Interactive_RequestQueue", pojo);
        System.out.println("heyyyyyy:" + sampleResponseMessageRabbitConverterFuture.get().getLatitute());
        //resultList.add(res);
        //resultList.add(sampleResponseMessageRabbitConverterFuture.get());
    }
}
Using this I am able to publish to the queue; my subscriber code is below.
@EnableRabbit
public class ListenerQueueSubscriber {
    @RabbitHandler
    @RabbitListener(containerFactory = "simpleMessageListenerContainerFactory", queues = "Interactive_RequestQueue")
    public void subscribeToRequestQueue(@Payload CSVPojo sampleRequestMessage, Message message) throws InterruptedException {
        System.out.println("inside listener");
        sampleRequestMessage.setResult("Hello");
        Thread.sleep(120000);
        System.out.println("After sleep:" + sampleRequestMessage.getLongitude());
        //return sampleRequestMessage;
    }
}
With the above subscriber I am able to receive the message; I append "Hello" and sleep for 2 minutes, after which the message should be sent back to the publisher that published it. Unfortunately, I am not receiving the message with "Hello" appended; I get an AmqpReplyTimeoutException instead. Can you please help me achieve this behavior?
Thanks in advance!

Not able to filter messages received using the condition attribute in the Spring Cloud Stream @StreamListener annotation

I am trying to create an event-based system for communication between services, using Apache Kafka as the messaging system and Spring Cloud Stream Kafka.
I have written my receiver class methods as below.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeCreatedEvent'")
public void handleEmployeeCreatedEvent(@Payload String payload) {
    logger.info("Received EmployeeCreatedEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeCreatedEvent.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeTransferredEvent'")
public void handleEmployeeTransferredEvent(@Payload String payload) {
    logger.info("Received EmployeeTransferredEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeTransferredEvent.
@StreamListener(target = Sink.INPUT)
public void handleDefaultEvent(@Payload String payload) {
    logger.info("Received payload: " + payload);
}
This is the default method.
When I run the application, I do not see the methods annotated with the condition attribute being called; I only see the handleDefaultEvent method being called.
I am sending a message to this receiver application from the sending/source app using the CustomMessageSource class below:
@Component
@EnableBinding(Source.class)
public class CustomMessageSource {
    @Autowired
    private Source source;

    public void sendMessage(String payload, String eventType) {
        Message<String> myMessage = MessageBuilder.withPayload(payload)
                .setHeader("eventType", eventType)
                .build();
        source.output().send(myMessage);
    }
}
I am calling the method from my controller in Source App as below,
customMessageSource.sendMessage("Hello","EmployeeCreatedEvent");
The customMessageSource instance is autowired as below,
@Autowired
CustomMessageSource customMessageSource;
Basically, I would like to filter the messages received by the sink/receiver application and handle them accordingly.
For this I have used the @StreamListener annotation with the condition attribute to simulate the behaviour of handling different events.
I am using Spring Cloud Stream Chelsea.SR2.
Can someone help me resolve this issue?
It seems like the headers are not propagated. Make sure you include the custom headers in spring.cloud.stream.kafka.binder.headers; see http://docs.spring.io/autorepo/docs/spring-cloud-stream-docs/Chelsea.SR2/reference/htmlsingle/#_kafka_binder_properties.
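Under that assumption, a one-line application.properties sketch (eventType is the custom header set in the question):
spring.cloud.stream.kafka.binder.headers=eventType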
