Multiple functions in Spring Cloud Function on AWS Lambda

I have a Spring Cloud Function application with two functions:
@Component
public class MyFunctionOne implements Function<Object, Boolean> {
    @Override
    public Boolean apply(Object o) {
        return true;
    }
}

@Component
public class MyFunctionTwo implements Function<Object, Boolean> {
    @Override
    public Boolean apply(Object o) {
        return true;
    }
}
I have both spring-cloud-starter-function-web and spring-cloud-function-adapter-aws in my project dependencies.
I would like to call both MyFunctionOne and MyFunctionTwo separately.
I can achieve this running locally in two ways. Since I am using spring-cloud-starter-function-web, I can call a function directly via localhost:8080/myFunctionOne. Alternatively, I can use the function routing functionality by calling localhost:8080/functionRouter and supplying myFunctionOne in the spring.cloud.function.definition HTTP header. This works fine, and I can trigger MyFunctionOne and MyFunctionTwo separately.
I have deployed the module to AWS Lambda. How do I supply spring.cloud.function.definition dynamically? According to the documentation, I can use org.springframework.cloud.function.adapter.aws.FunctionInvoker as an AWS Lambda handler, or alternatively define my own SpringBootStreamHandler. However, it seems that it is not possible to set spring.cloud.function.definition dynamically.
Is there a way to choose a function in Spring Cloud Function deployed to AWS Lambda?

So, we have a bit of an issue with routing on AWS, and it is being fixed as we speak. You can follow this issue: https://github.com/spring-cloud/spring-cloud-function/issues/698
In any event, we'll have a new release early next week (3.1.3-RELEASE) which will include the fixes, samples, and docs to address the issue you're describing.
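For reference, the setup that release is meant to enable is to keep FunctionInvoker as the handler and delegate to the built-in RoutingFunction. A minimal sketch in application.properties (the routing expression and the functionName payload field are illustrative assumptions about your event shape, not a fixed API):
spring.cloud.function.definition=functionRouter
# route on a field of the incoming event payload (illustrative)
spring.cloud.function.routing-expression=payload.functionName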

Related

@Gateway is not required if we use @MessagingGateway

From Spring Integration 4.0 onwards, @MessagingGateway was introduced. Using it, if we have only one gateway method in our Gateway interface, we don't need to annotate that method with @Gateway.
Below is my example; both versions work.
So, my question is: can we stop using @Gateway when we have only one method in the Gateway interface?
Code-1:
@MessagingGateway(name = "demoGateway")
public interface DemoGateway {
    @Gateway(requestChannel = "gatewayRequestChannel", replyChannel = "nullChannel")
    void accept(Message<String> request);
}
Code-2:
#MessagingGateway(name="demoGateway",defaultRequestChannel =
"gatewayRequestChannel",defaultReplyChannel = "nullChannel")
public interface DemoGateway {
void accept(Message<String> request);
}
Yes, you are right. You can go with approach 2 and leave the single method that conforms to the default configuration of @MessagingGateway without an annotation.
However, in practice I would move only the truly default values to @MessagingGateway and keep the other values on the @Gateway annotation.
This makes life and readability easier if you have to add more methods to DemoGateway in the future.
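As a hypothetical illustration of that advice (the second method and the auditReplyChannel name are made up), the defaults cover the common case while @Gateway still overrides them where needed:

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;
import org.springframework.messaging.Message;

@MessagingGateway(name = "demoGateway", defaultRequestChannel = "gatewayRequestChannel")
public interface DemoGateway {

    // relies entirely on the defaults declared above
    void accept(Message<String> request);

    // overrides the default reply channel for this one method only
    @Gateway(requestChannel = "gatewayRequestChannel", replyChannel = "auditReplyChannel")
    void acceptAndAudit(Message<String> request);
}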

How to have dynamic base URL with Quarkus MicroProfile Rest Client?

The Quarkus guide on using the REST Client explains how to use the MicroProfile REST Client. The base URL can be set in application.properties:
org.acme.restclient.CountriesService/mp-rest/url=https://restcountries.eu/rest
With the above approach, the base URL cannot be changed dynamically.
I was able to achieve it by using RestClientBuilder, as explained in the MicroProfile Rest Client documentation. The downside of this approach is losing the auto-negotiation capability.
SimpleGetApi simpleGetApi = RestClientBuilder.newBuilder().baseUri(getApplicationUri()).build(SimpleGetApi.class);
Is there another or better way to achieve this? Thanks.
While it is true that the MP Rest Client does not allow you to set the base URI dynamically when you use declarative/injected clients, there are some (albeit hacky) ways to achieve that.
One is to use a standard ClientRequestFilter, which can modify the URL:
import java.io.IOException;
import java.net.URI;
import javax.inject.Inject;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.ext.Provider;

@Provider
public class Filter implements ClientRequestFilter {

    @Inject
    RequestScopeHelper helper;

    @Override
    public void filter(ClientRequestContext requestContext) throws IOException {
        if (helper.getUrl() != null) {
            // swap the statically configured host for the dynamic one
            URI newUri = URI.create(requestContext.getUri().toString()
                    .replace("https://originalhost.com", helper.getUrl()));
            requestContext.setUri(newUri);
        }
    }
}
Where RequestScopeHelper is some helper class (e.g. a request-scoped bean) through which you can pass the dynamic URL, for example:
@Inject
RequestScopeHelper helper;

@Inject
@RestClient
TestIface myApiClient;

public void callSomeAPIWithDynamicBaseUri(String dynamic) {
    helper.setUrl(dynamic);
    myApiClient.someMethod();
}
The second option is to use the MP Rest Client SPI, namely RestClientListener, which allows you to modify rest clients after they are built.
For this to work, you have to set the scope of your rest client to RequestScoped so that a new instance is created for each request (if you use a singleton, for example, the client is only created once and your listener will only be called once). You can do this via Quarkus properties:
quarkus.rest-client."com.example.MyRestIface".scope=javax.enterprise.context.RequestScoped
public class MyListener implements RestClientListener {

    @Override
    public void onNewClient(Class<?> serviceInterface, RestClientBuilder builder) {
        // obtain the dynamic URI from somewhere, e.g. a request-scoped bean lookup,
        // or a dynamic config source (create an in-memory ConfigSource, set the
        // corresponding rest client url property before invoking the client, then
        // read it here via ConfigProvider.getConfig().getValue(...))
        String newUri = ...;
        builder.baseUri(URI.create(newUri));
    }
}
Don't forget to register this listener as a service provider (META-INF/services/org.eclipse.microprofile.rest.client.spi.RestClientListener).
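The provider file contains just the fully qualified name of your listener class (the com.example package here is an assumption):
# src/main/resources/META-INF/services/org.eclipse.microprofile.rest.client.spi.RestClientListener
com.example.MyListener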
Another option is to use a custom CDI producer that produces the rest client instances for you; then you control all the client configuration yourself. You can use RestClientBase from the Quarkus rest client, which is exactly what Quarkus uses under the hood during the deployment phase to construct client instances. You will, however, have to duplicate all the logic related to registration of handlers, interceptors, etc.
Do keep in mind that any of these solutions will make debugging and problem analysis more challenging, because you will now have multiple places where the URI is controlled (MP config/Quarkus properties, env vars, your custom impl...), so be careful with your approach and maybe add some explicit log messages when you override the URI manually.
MicroProfile REST Client in Quarkus does allow you to use a dynamic base URL with this simple "hack":
Just put an empty String in the @Path annotations of your API interface, like this:
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("")
public interface SimpleGetApi {

    @Path("")
    @GET
    String callWithDynamicUrl(); // it can be String or any return type you want
}
After that, you are ready to call your dynamic base URL:
import java.net.URI;
import org.eclipse.microprofile.rest.client.RestClientBuilder;

public class Example {
    public static void main(String[] args) {
        URI anyDynamicUrl = URI.create("http://restcountries.eu/rest/some/dynamic/path");
        SimpleGetApi simpleGetApi = RestClientBuilder.newBuilder()
                .baseUri(anyDynamicUrl)
                .build(SimpleGetApi.class);
        simpleGetApi.callWithDynamicUrl();
    }
}

What's the "Right Way" to send a data changed websocket event and ensure the database is committed in Spring Boot

Note: see the end of this question for the way I implemented @Nonika's suggestions
What's the "right way" to send a websocket event on data insert?
I'm using a Spring Boot server with SQL/JPA and non-STOMP websockets. I need to use "plain" websockets because I'm using Java clients, where (AFAIK) there's no STOMP support.
When I make a change to the database, I need to send an event to the client, so I ended up with an implementation like this:
@Transactional
public void addEntity(...) {
    performActualEntityAdding();
    sendEntityAddedEvent(eventData);
}

@Transactional
public void sendEntityAddedEvent(String eventData) {
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
        @Override
        public void afterCommit() {
            sendEntityAddedEventAsync(eventData);
        }
    });
}

@Async
public void sendEntityAddedEventAsync(String eventData) {
    // does the websocket session sending...
}
This works. If I just called sendEntityAddedEventAsync directly, it would also work in real-world scenarios, but it fails in unit tests because the event can arrive before the transaction commits. As such, when the unit test lists the entities after receiving the event, it fails.
This feels like a hack that shouldn't be here. Is there a better way to ensure a commit?
I tried multiple alternative approaches, and the problem is that they often worked for 10 runs of the unit tests yet failed every once in a while. That isn't acceptable.
I tried multiple approaches to solve this, such as different transaction annotations and splitting the method to accommodate them, e.g. read uncommitted, not supported (to force a commit), etc. Nothing worked for all cases, and I couldn't find an authoritative answer for this (probably common) use case that wasn't about STOMP (which is pretty different).
Edit
One of my original attempts looked something like this:
// this shouldn't be in a transaction
public void addEntity(...) {
    performActualEntityAdding();
    sendEntityAddedEvent(eventData);
}

@Transactional
public void performActualEntityAdding(...) {
    // ....
}

@Async
public void sendEntityAddedEventAsync(String eventData) {
    // does the websocket session sending...
}
The assumption here was that by the time sendEntityAddedEventAsync was invoked, the data would already be in the database. It wasn't, for a couple of additional milliseconds.
A few additional details:
The test environment is based on H2 (initially I mistakenly wrote HSQL)
The project was generated by JHipster
A level 2 cache is used, but disabled (NONE) for these entities
Solution (based on @Nonika's answer):
The solution for me included something similar to this:
public class WebEvent extends ApplicationEvent {
    private ServerEventDAO event;

    public WebEvent(Object source, ServerEventDAO event) {
        super(source);
        this.event = event;
    }

    public ServerEventDAO getEvent() {
        return event;
    }
}

@Transactional
public void addEntity(...) {
    performActualEntityAdding();
    applicationEventPublisher.publishEvent(new WebEvent(this, evtDao));
}

@Async
@TransactionalEventListener
public void sendEntityAddedEventAsync(WebEvent eventData) {
    // does the websocket session sending...
}
This effectively guarantees that the data is committed properly before sending the event and it runs asynchronously to boot. Very nice and simple.
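(For reference, @TransactionalEventListener fires in the AFTER_COMMIT phase by default, which is exactly the guarantee needed here.)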
Spring uses AdviceMode.PROXY for both @Async and @Transactional. This is a quote from the Javadoc:
The default is AdviceMode.PROXY. Please note that proxy mode allows for interception of calls through the proxy only. Local calls within the same class cannot get intercepted that way; an Async annotation on such a method within a local call will be ignored since Spring's interceptor does not even kick in for such a runtime scenario. For a more advanced mode of interception, consider switching this to AdviceMode.ASPECTJ.
This rule is common to almost all Spring annotations that require a proxy to operate.
In your first example, you have a @Transactional annotation on both addEntity(..) and performActualEntityAdding(..). I suppose you call addEntity from another class, so @Transactional works as expected. The process in this scenario can be described by this flow:
// -> N1 transaction starts
addEntity() {
    performActualEntityAdding(); // -> we are still in transaction N1
    sendEntityAddedEvent(); // -> the call to this @Async method is a class-local call,
                            // so the advice is ignored; even a truly async call
                            // would not help here
}
// N1 transaction commits
That's why the test fails: it gets an event that there is a change in the DB, but there is nothing there yet, because the transaction has not been committed.
Scenario 2:
When addEntity(..) is not annotated with @Transactional, the second transaction for performActualEntityAdding(..) does not start either, because that is also a local call.
Options:
You can use some middleman class to call these methods, so that Spring's interceptors are triggered.
You can use self-injection with Spring, as shown in the sketch below.
If you are on Spring 4.2 or later, there is the handy @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT).
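A minimal sketch of the self-injection option (the method names reuse the question's; the @Lazy wiring and class name are illustrative):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EntityService {

    // inject the proxy of this very bean; @Lazy breaks the circular reference
    @Lazy
    @Autowired
    private EntityService self;

    public void addEntity(String eventData) {
        // goes through the proxy, so the transaction commits when this returns
        self.performActualEntityAdding();
        // goes through the proxy, so @Async applies and runs after the commit
        self.sendEntityAddedEventAsync(eventData);
    }

    @Transactional
    public void performActualEntityAdding() {
        // ... persist the entity ...
    }

    @Async
    public void sendEntityAddedEventAsync(String eventData) {
        // does the websocket session sending...
    }
}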

Implementation of DynamoDB for Spring Boot

I am trying to implement a DynamoDB backend for my Spring Boot application, but AWS recently updated their SDKs for DynamoDB. Therefore, almost all of the tutorials available on the internet, such as http://www.baeldung.com/spring-data-dynamodb, aren't directly relevant.
I've read through Amazon's SDK documentation for the DynamoDB class. Specifically, the way the object is instantiated and the endpoint/region are set has changed. In the past, constructing the client and setting the endpoint looked like this:
@Bean
public AmazonDynamoDB amazonDynamoDB() {
    AmazonDynamoDB amazonDynamoDB
        = new AmazonDynamoDBClient(amazonAWSCredentials());
    if (!StringUtils.isEmpty(amazonDynamoDBEndpoint)) {
        amazonDynamoDB.setEndpoint(amazonDynamoDBEndpoint);
    }
    return amazonDynamoDB;
}

@Bean
public AWSCredentials amazonAWSCredentials() {
    return new BasicAWSCredentials(
        amazonAWSAccessKey, amazonAWSSecretKey);
}
However, the setEndpoint() method is now deprecated, and the AWS documentation states that we should construct the DynamoDB object through a builder:
AmazonDynamoDBClient() Deprecated. Use AmazonDynamoDBClientBuilder.defaultClient()
This other StackOverflow post recommends using this strategy to instantiate the database connection object:
DynamoDB dynamoDB = new DynamoDB(AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(new EndpointConfiguration("http://localhost:8000", "us-east-1"))
        .build());
Table table = dynamoDB.getTable("Movies");
However, IntelliJ gives me an error that DynamoDB is abstract and cannot be instantiated, and I cannot find any documentation on the proper class to extend.
In other words, I've scoured tutorials, SO, and the AWS documentation, and haven't found what I believe is the correct way to create my client. Can someone provide an implementation that works? I'm specifically trying to set up a client against a local DynamoDB instance (endpoint at localhost, port 8000).
I think I can take a stab at answering my own question. Using the developer guide for DynamoDBMapper, you can implement a DynamoDBMapper object that takes in your client and performs data operations for you, like loading, querying, deleting, and saving (essentially CRUD). Here's the documentation I found helpful.
I created my own class called DynamoDBMapperClient with this code:
private AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(
            new EndpointConfiguration(amazonDynamoDBEndpoint, amazonAWSRegion))
        .build();

private AWSCredentials awsCredentials = new AWSCredentials() {
    @Override
    public String getAWSAccessKeyId() {
        return null;
    }

    @Override
    public String getAWSSecretKey() {
        return null;
    }
};

private DynamoDBMapper mapper = new DynamoDBMapper(amazonDynamoDB);

public DynamoDBMapper getMapper() {
    return mapper;
}
Basically, it takes the endpoint and region configuration from a properties file, then instantiates a new mapper that is exposed through a getter.
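If you would rather have Spring manage this, a hedged sketch of the same wiring as configuration beans (the property names are illustrative, not a convention from the SDK or JHipster):

import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DynamoDBConfig {

    // illustrative property names; adjust to your own application.properties
    @Value("${amazon.dynamodb.endpoint:http://localhost:8000}")
    private String amazonDynamoDBEndpoint;

    @Value("${amazon.aws.region:us-east-1}")
    private String amazonAWSRegion;

    @Bean
    public AmazonDynamoDB amazonDynamoDB() {
        return AmazonDynamoDBClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        amazonDynamoDBEndpoint, amazonAWSRegion))
                .build();
    }

    @Bean
    public DynamoDBMapper dynamoDBMapper(AmazonDynamoDB amazonDynamoDB) {
        return new DynamoDBMapper(amazonDynamoDB);
    }
}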
I know this may not be the complete answer, so I'm leaving this unanswered, but at least it's a start and you guys can tell me what I'm doing wrong!

Configuring Spring MockMvc to use custom argument resolver before built-in ones

I have a straightforward test case. I have a controller which has a parameter of a type Spring doesn't support by default, so I wrote a custom resolver.
I create the mock mvc instance I'm using like so:
mvc = MockMvcBuilders.standaloneSetup(controller)
        .setCustomArgumentResolvers(new GoogleOAuthUserResolver())
        .build();
However, Spring also registers almost 30 other argument resolvers, one of which is general enough that it gets used to resolve the argument before mine. How can I set or sort the resolvers so that mine is invoked first?
This worked for me without reflection:
@RequiredArgsConstructor
@Configuration
public class CustomerNumberArgumentResolverRegistration {

    private final RequestMappingHandlerAdapter requestMappingHandlerAdapter;

    @PostConstruct
    public void prioritizeCustomArgumentResolver() {
        final List<HandlerMethodArgumentResolver> argumentResolvers =
                new ArrayList<>(Objects.requireNonNull(requestMappingHandlerAdapter.getArgumentResolvers()));
        // put the custom resolver first so it wins over the built-in ones
        argumentResolvers.add(0, new CustomerNumberArgumentResolver());
        requestMappingHandlerAdapter.setArgumentResolvers(argumentResolvers);
    }
}
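Adding the resolver at index 0 matters because the adapter consults its resolvers in order and uses the first one whose supportsParameter(..) returns true for a given parameter, caching that choice per parameter.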
The issue was that the People class in the Google OAuth library I am using extends Map, and the mock servlet API provides no way to manipulate the order in which the handlers are registered.
I ended up using reflection to reach into the mock's guts and remove the offending handler.
