How to create a Micronaut AWS Lambda function triggered by an S3Event?

I looked at the Micronaut documentation at https://docs.micronaut.io/latest/guide/index.html#functionBean and all examples assume events are coming from API Gateway, with the request body sent in as a POJO. Can Micronaut also support S3Event and all the other AWS Lambda event types for its serverless functions? Example: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-java
Can something like the below be supported? I couldn't find how Java Functions are mapped in Micronaut to the RequestHandler<S3Event, String> that AWS typically expects.
package example;

import com.amazonaws.services.lambda.runtime.events.S3Event;
import io.micronaut.function.FunctionBean;
import java.util.function.Function;

@FunctionBean("hello-world-java")
public class HelloJavaFunction implements Function<S3Event, String> {

    @Override
    public String apply(S3Event event) {
        return "Hello world!";
    }
}

It could also be done by extending MicronautRequestHandler:
import com.amazonaws.services.lambda.runtime.events.S3Event;
import io.micronaut.function.FunctionBean;
import io.micronaut.function.aws.MicronautRequestHandler;

@FunctionBean("hello-world-java")
public class HelloJavaFunction extends MicronautRequestHandler<S3Event, String> {

    @Override
    public String execute(final S3Event event) {
        return "Hello world!";
    }
}
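For what it's worth: MicronautRequestHandler lives in the micronaut-function-aws module and implements AWS's RequestHandler interface, so a class like the one above can be configured directly as the Lambda handler (e.g. example.HelloJavaFunction), and S3Event or any other type from aws-lambda-java-events should work as the input type. Treat this as a reading of the API rather than a tested setup.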

Related

Quarkus @CacheResult is not working properly

I am trying to use quarkus-cache by following the appropriate Quarkus doc. I have the setup below:
// javax.* CDI packages as of the question's Quarkus version; newer releases use jakarta.*
import javax.enterprise.context.ApplicationScoped;
import io.quarkus.cache.CacheResult;

@ApplicationScoped
class MyClass {

    public String doSomething(String url) {
        return getRemoteData(url);
    }

    @CacheResult(cacheName = "myCacheName")
    public String getRemoteData(String url) {
        return remoteCall(url);
    }
}
Usage:
// gRPC impl class
// calls MyClass.doSomething(url)
Execution does not proceed past the first call to getRemoteData(), and there is no error.
Am I missing something?
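No answer is recorded for this one, but one caveat worth ruling out (an observation added here, not a confirmed diagnosis): @CacheResult is implemented as a CDI interceptor, and interceptors are not applied on self-invocation, so calling getRemoteData() from doSomething() inside the same bean bypasses the cache entirely. A minimal sketch that moves the cached method behind the CDI proxy (class names hypothetical):

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import io.quarkus.cache.CacheResult;

@ApplicationScoped
class RemoteDataService {

    @CacheResult(cacheName = "myCacheName")
    public String getRemoteData(String url) {
        return remoteCall(url); // remoteCall as in the question
    }
}

@ApplicationScoped
class MyClass {

    @Inject
    RemoteDataService remoteDataService;

    public String doSomething(String url) {
        // the call now goes through the CDI client proxy, so the interceptor runs
        return remoteDataService.getRemoteData(url);
    }
}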

How to publish a message to a Kafka topic using Spring Cloud Stream in a reactive way [using WebFlux]?

Publish a message to a Kafka topic without using StreamBridge, as it uses deprecated components.
Using the Reactor API:
"All you need to do is declare a Supplier<Flux<whatever>> which returns EmitterProcessor from the reactor API (see Reactive Functions support for more details) to effectively provide a bridge between the actual event source (foreign source) and spring-cloud-stream. All you need to do now is feed the EmitterProcessor with data via EmitterProcessor#onNext(data) operation."
(Quoted from the Spring Cloud Stream docs.)
import java.util.function.Supplier;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseStatus;
import reactor.core.publisher.EmitterProcessor;
import reactor.core.publisher.Flux;

@SpringBootApplication
@Controller
public class WebSourceApplication {

    public static void main(String[] args) {
        SpringApplication.run(WebSourceApplication.class);
    }

    private final EmitterProcessor<String> processor = EmitterProcessor.create();

    @RequestMapping
    @ResponseStatus(HttpStatus.ACCEPTED)
    public void delegateToSupplier(@RequestBody String body) {
        processor.onNext(body);
    }

    @Bean
    public Supplier<Flux<String>> supplier() {
        return () -> this.processor;
    }
}
To send a message, use curl:
curl -H "Content-Type: text/plain" -X POST -d "hello from the other side" http://localhost:8080/
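One caveat worth adding: EmitterProcessor was deprecated in Reactor 3.4 in favor of the Sinks API. A minimal sketch of the same bridge using Sinks.Many, assuming Reactor 3.4+ (only the changed members of WebSourceApplication are shown; add import reactor.core.publisher.Sinks):

private final Sinks.Many<String> sink = Sinks.many().multicast().onBackpressureBuffer();

@RequestMapping
@ResponseStatus(HttpStatus.ACCEPTED)
public void delegateToSupplier(@RequestBody String body) {
    // tryEmitNext replaces EmitterProcessor#onNext
    sink.tryEmitNext(body);
}

@Bean
public Supplier<Flux<String>> supplier() {
    // asFlux() exposes the sink as the Flux the binder subscribes to
    return sink::asFlux;
}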

Example JUnit 5 Pact message provider test

I have been able to convert message consumer Pact tests to JUnit 5, but am not sure how to use the information in the JUnit 5 provider readme to convert the corresponding message provider verification tests. Can someone point to an example, or suggest an outline of how provider tests for message queue providers are supposed to work with the PactVerificationContext?
I am trying to convert something like:
import java.io.IOException;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.ArgumentCaptor;
import org.mockito.Mockito;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;

import com.fasterxml.jackson.databind.ObjectMapper;

import au.com.dius.pact.provider.PactVerifyProvider;
import au.com.dius.pact.provider.junit.Consumer;
import au.com.dius.pact.provider.junit.PactRunner;
import au.com.dius.pact.provider.junit.Provider;
import au.com.dius.pact.provider.junit.loader.PactFolder;
import au.com.dius.pact.provider.junit.target.AmqpTarget;
import au.com.dius.pact.provider.junit.target.Target;
import au.com.dius.pact.provider.junit.target.TestTarget;

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

@RunWith(PactRunner.class)
@Provider("provider")
@Consumer("consumer")
@PactFolder("target/pacts")
public class MessageProviderPact {

    @TestTarget
    public final Target target = new AmqpTarget();

    private final ObjectMapper objectMapper = new ObjectMapper();

    private final KafkaTemplate<String, MsgObject> kafkaTemplate =
            (KafkaTemplate<String, MsgObject>) Mockito.mock(KafkaTemplate.class);

    private final MessageProducer messageProducer = new MessageProducer(kafkaTemplate);

    @Test
    @PactVerifyProvider("case a")
    public String verifyCaseA() throws IOException {
        // given
        ListenableFuture<SendResult<String, MsgObject>> future = mock(ListenableFuture.class);
        doReturn(future).when(kafkaTemplate).send(any(String.class), any(MsgObject.class));

        // when
        DomainObj domainObj = new DomainObj();
        String topic = "kafka_add";
        messageProducer.send(topic, domainObj);

        // then
        ArgumentCaptor<MsgObject> messageCapture = ArgumentCaptor.forClass(MsgObject.class);
        verify(kafkaTemplate, times(1)).send(eq(topic), messageCapture.capture());

        // returning the message
        return objectMapper.writeValueAsString(messageCapture.getValue());
    }
}
You should not use the KafkaTemplate to verify the Pact message. If you have created test objects for unit testing, you can reuse those same objects to verify the messages. You can find the full implementation here.
An example can be found in the pact-jvm project repo.
The relevant code has been included below:
// Imports assume the pact-jvm 4.x package layout; several of these classes have
// moved between the junit, junit5 and junitsupport packages across versions.
import au.com.dius.pact.core.model.Interaction;
import au.com.dius.pact.core.model.Pact;
import au.com.dius.pact.provider.PactVerifyProvider;
import au.com.dius.pact.provider.junit5.MessageTestTarget;
import au.com.dius.pact.provider.junit5.PactVerificationContext;
import au.com.dius.pact.provider.junit5.PactVerificationInvocationContextProvider;
import au.com.dius.pact.provider.junitsupport.Provider;
import au.com.dius.pact.provider.junitsupport.State;
import au.com.dius.pact.provider.junitsupport.loader.PactFolder;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.TestTemplate;
import org.junit.jupiter.api.extension.ExtendWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Provider("AmqpProvider")
@PactFolder("src/test/resources/amqp_pacts")
public class AmqpContractTest {

    private static final Logger LOGGER = LoggerFactory.getLogger(AmqpContractTest.class);

    @TestTemplate
    @ExtendWith(PactVerificationInvocationContextProvider.class)
    void testTemplate(Pact pact, Interaction interaction, PactVerificationContext context) {
        LOGGER.info("testTemplate called: " + pact.getProvider().getName() + ", " + interaction);
        context.verifyInteraction();
    }

    @BeforeEach
    void before(PactVerificationContext context) {
        context.setTarget(new MessageTestTarget());
    }

    @State("SomeProviderState")
    public void someProviderState() {
        LOGGER.info("SomeProviderState callback");
    }

    @PactVerifyProvider("a test message")
    public String verifyMessageForOrder() {
        return "{\"testParam1\": \"value1\",\"testParam2\": \"value2\"}";
    }
}
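One follow-up note on the example (an observation, not part of the original answer): the description in @PactVerifyProvider("a test message") must match the message description recorded in the pact file, and the returned string is what gets verified against the consumer's expected message contents, which is why the provider test needs no running Kafka or AMQP broker.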

How to trigger a retry with Spring Cloud Functions with AWS Lambda and SNS Events

I have a Spring Cloud Function running on AWS Lambda handling SNS Events.
For some error cases I would like to trigger automatic Lambda retries or trigger the retry capabilities of the SNS service. The SNS retry policies are in their default configuration.
I tried to return a JSON body of {"statusCode": 500}, which works when we make a test invocation in the AWS console.
However, when we send this status, no retry invocation of the function is triggered.
We use the SpringBootRequestHandler:
import java.util.function.Function;

import org.springframework.cloud.function.adapter.aws.SpringBootRequestHandler;
import org.springframework.stereotype.Component;

import com.amazonaws.services.lambda.runtime.events.SNSEvent;

public class CustomerUpdatePersonHandler extends SpringBootRequestHandler<SNSEvent, Response> {
}

@Component
public class CustomerUpdatePerson implements Function<SNSEvent, Response> {

    @Override
    public Response apply(final SNSEvent snsEvent) {
        // when something goes wrong, return 500 and trigger a retry
        return new Response(500);
    }
}

public class Response {

    private int statusCode;

    public Response(int code) {
        this.statusCode = code;
    }

    public int getStatusCode() {
        return statusCode;
    }
}
We currently don't provide support for retry, but given that every function is transformed to a reactive function anyway, you can certainly do it yourself if you declare your function using the Reactor API: basically Function<Flux<SNSEvent>, Flux<Response>>, and then you can use one of the retry operations available (e.g., retryBackoff).
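A minimal sketch of that suggestion, assuming a process(SNSEvent) helper that throws on failure (the helper is hypothetical, not from the answer). Note that retryBackoff has since been deprecated in Reactor in favor of retryWhen(Retry.backoff(...)), which is what the sketch uses:

import java.time.Duration;
import java.util.function.Function;

import org.springframework.stereotype.Component;

import com.amazonaws.services.lambda.runtime.events.SNSEvent;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

@Component
public class CustomerUpdatePerson implements Function<Flux<SNSEvent>, Flux<Response>> {

    @Override
    public Flux<Response> apply(final Flux<SNSEvent> snsEvents) {
        return snsEvents.flatMap(event ->
                Mono.fromCallable(() -> process(event))             // hypothetical helper that throws on failure
                    .retryWhen(Retry.backoff(3, Duration.ofSeconds(1)))
                    .onErrorReturn(new Response(500)));             // give up once the retries are exhausted
    }

    private Response process(final SNSEvent event) {
        // ... domain logic; throw to trigger a retry
        return new Response(200);
    }
}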

SonarQube PostProjectAnalysisTask example?

I have been searching for any working PostProjectAnalysisTask code example, with no luck. This page states that the HipChat plugin uses this hook, but it seems to me that it still uses the legacy PostJob extension point...
There is an example on their page now.
https://docs.sonarqube.org/display/DEV/Adding+Hooks
import org.sonar.api.ce.posttask.PostProjectAnalysisTask;
import org.sonar.api.ce.posttask.QualityGate;
import org.sonar.api.server.Server;

public class MyHook implements PostProjectAnalysisTask {

    private final Server server;

    public MyHook(Server server) {
        this.server = server;
    }

    @Override
    public void finished(ProjectAnalysis analysis) {
        QualityGate gate = analysis.getQualityGate();
        if (gate.getStatus() == QualityGate.Status.ERROR) {
            String baseUrl = server.getURL();
            // TODO send notification
        }
    }
}
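The docs snippet defines the hook but stops before registering it; a PostProjectAnalysisTask only runs if the plugin's entry point declares it as an extension. A minimal sketch of that registration using the standard org.sonar.api.Plugin interface (the plugin class name is hypothetical):

import org.sonar.api.Plugin;

public class MyHookPlugin implements Plugin {

    @Override
    public void define(Context context) {
        // registers MyHook so the compute engine invokes it after each project analysis
        context.addExtension(MyHook.class);
    }
}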
