Understanding the difference between a Custom Handler and SpringBootApiGatewayRequestHandler - spring-boot

I'm new to Spring Cloud Function and came across it as one of the best solutions for developing FaaS-based applications. I am specifically writing an application for AWS Lambda that serves as the back-end of an API Gateway. I ran into a very interesting problem with my test application, and it is around the Handler. My test application works well with a custom handler written as:
public class UserProfileHandler extends SpringBootRequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
}
which works well when configured as the Handler in AWS Lambda. Then I came across org.springframework.cloud.function.adapter.aws.SpringBootApiGatewayRequestHandler, which is available in the Spring Cloud Function dependency. I wanted to get rid of UserProfileHandler, so I changed the Handler configuration in AWS Lambda to org.springframework.cloud.function.adapter.aws.SpringBootApiGatewayRequestHandler instead of ...UserProfileHandler, and now the Lambda fails with the following error message. Has anyone run into this problem?
{
  "errorMessage": "java.util.Optional cannot be cast to com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent",
  "errorType": "java.lang.ClassCastException",
  "stackTrace": [
    "com.transformco.hs.css.userprofile.function.UserProfileFunction.apply(UserProfileFunction.java:16)",
    "org.springframework.cloud.function.context.catalog.BeanFactoryAwareFunctionRegistry$FunctionInvocationWrapper.invokeFunction(BeanFactoryAwareFunctionRegistry.java:499)",
    "org.springframework.cloud.function.context.catalog.BeanFactoryAwareFunctionRegistry$FunctionInvocationWrapper.lambda$doApply$1(BeanFactoryAwareFunctionRegistry.java:543)",
    "reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:107)",
    "reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:121)",
    "reactor.core.publisher.FluxJust$WeakScalarSubscription.request(FluxJust.java:99)",
    "reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.request(FluxMapFuseable.java:162)",
    "reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.request(FluxMapFuseable.java:162)",
    "reactor.core.publisher.BlockingIterable$SubscriberIterator.onSubscribe(BlockingIterable.java:218)",
    "reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onSubscribe(FluxMapFuseable.java:90)",
    "reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onSubscribe(FluxMapFuseable.java:90)",
    "reactor.core.publisher.FluxJust.subscribe(FluxJust.java:70)",
    "reactor.core.publisher.InternalFluxOperator.subscribe(InternalFluxOperator.java:53)",
    "reactor.core.publisher.BlockingIterable.iterator(BlockingIterable.java:80)",
    "org.springframework.cloud.function.adapter.aws.SpringBootRequestHandler.result(SpringBootRequestHandler.java:59)",
    "org.springframework.cloud.function.adapter.aws.SpringBootRequestHandler.handleRequest(SpringBootRequestHandler.java:52)",
    "org.springframework.cloud.function.adapter.aws.SpringBootApiGatewayRequestHandler.handleRequest(SpringBootApiGatewayRequestHandler.java:140)",
    "org.springframework.cloud.function.adapter.aws.SpringBootApiGatewayRequestHandler.handleRequest(SpringBootApiGatewayRequestHandler.java:43)"
  ]
}

Ganesh, I believe you have already raised this issue in the Spring Cloud Function GitHub repository. As I stated there, we have recently made several enhancements, polished the samples, and updated the documentation by adding a Getting Started guide.
That said, with the new generic request handler you no longer need to provide an implementation of an AWS request handler, including SpringBootApiGatewayRequestHandler.
Simply write your Boot application so that it contains a function bean:
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class FunctionConfiguration {

    public static void main(String[] args) {
        SpringApplication.run(FunctionConfiguration.class, args);
    }

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }
}
. . . and specify org.springframework.cloud.function.adapter.aws.FunctionInvoker as the handler in the AWS dashboard. We'll do the rest for you.
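Since your function is fronted by API Gateway, the same approach should apply by declaring the event types on the function signature. A minimal sketch (the bean name userProfile and the pass-through body are illustrative assumptions, not your actual UserProfileFunction; behavior depends on your Spring Cloud Function version):

import java.util.function.Function;

import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UserProfileFunctionConfig {

    // FunctionInvoker inspects the function's declared types and converts
    // the raw Lambda input into the declared event type for you.
    @Bean
    public Function<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> userProfile() {
        return request -> new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withBody(request.getBody());
    }
}

The idea is that the adapter, not your code, performs the conversion to the declared event type, which is what the ClassCastException above was about.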

Related

Spring 6: Spring Cloud Stream Kafka - Replacement for @EnableBinding

I was reading "Spring Microservices in Action (2021)" because I wanted to brush up on microservices.
Now with Spring Boot 3 a few things have changed. In the book, easy examples of how to push messages to a topic and how to consume messages from a topic were presented.
The problem is: the examples presented simply do not work with Spring Boot 3. Sending messages from a Spring Boot 2 project works. The underlying project can be found here:
https://github.com/ihuaylupo/manning-smia/tree/master/chapter10
Example 1 (organization-service):
Consider this Config:
spring.cloud.stream.bindings.output.destination=orgChangeTopic
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.kafka.binder.zkNodes=kafka #kafka is used as a network alias in docker-compose
spring.cloud.stream.kafka.binder.brokers=kafka
And this component (class), which is injected into a service in this project:
@Component
public class SimpleSourceBean {

    private Source source;
    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    public SimpleSourceBean(Source source){
        this.source = source;
    }

    public void publishOrganizationChange(String action, String organizationId){
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        source.output().send(MessageBuilder.withPayload(change).build());
    }
}
This code fires a message to the topic (destination) orgChangeTopic. The way I understand it, the first time a message is fired, the topic is created.
Question 1: How do I do this in Spring Boot 3, config-wise and code-wise?
Example 2:
Consider this config:
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.content-type=application/json
spring.cloud.stream.bindings.input.group=licensingGroup
spring.cloud.stream.kafka.binder.zkNodes=kafka
spring.cloud.stream.kafka.binder.brokers=kafka
And this code:
@SpringBootApplication
@RefreshScope
@EnableDiscoveryClient
@EnableFeignClients
@EnableEurekaClient
@EnableBinding(Sink.class)
public class LicenseServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicenseServiceApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void loggerSink(OrganizationChangeModel orgChange) {
        log.info("Received an {} event for organization id {}",
                orgChange.getAction(), orgChange.getOrganizationId());
    }
}
What this is supposed to do: whenever a message arrives on orgChangeTopic, the method loggerSink should fire.
How do I do this in Spring Boot 3?
In Spring Cloud Stream 4.0.0 (the version used if you are using Boot 3), a few things were removed - such as EnableBinding, StreamListener, etc. We deprecated them in 3.x and finally removed them in 4.0.0. The annotation-based programming model was removed in favor of the functional programming style enabled through the Spring Cloud Function project. You essentially express your business logic as java.util.function.Function|Consumer|Supplier etc. for a processor, sink, and source, respectively. For ad-hoc source situations, as in your first example, Spring Cloud Stream provides the StreamBridge API for custom sends.
Your example #1 can be rewritten like this:
@Component
public class SimpleSourceBean {

    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    StreamBridge streamBridge;

    public void publishOrganizationChange(String action, String organizationId){
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        streamBridge.send("output-out-0", MessageBuilder.withPayload(change).build());
    }
}
Config
spring.cloud.stream.bindings.output-out-0.destination=orgChangeTopic
spring.cloud.stream.kafka.binder.brokers=kafka
Note that you no longer need the zkNodes property, nor the content-type property, since the framework auto-converts the payload for you.
StreamBridge send takes a binding name and the payload. The binding name can be anything - but for consistency reasons, we used output-out-0 here. Please read the reference docs for more context around the reasoning for this binding name.
If you have a simple source that runs on a timer, you can express it as a Supplier, as below (instead of using a StreamBridge).
@Bean
public Supplier<OrganizationChangeModel> output() {
    return () -> {
        // return the payload
    };
}
spring.cloud.function.definition=output
spring.cloud.stream.bindings.output-out-0.destination=...
Your example #2 can be rewritten like this:
@Bean
public Consumer<OrganizationChangeModel> loggerSink() {
    return model -> {
        log.info("Received an {} event for organization id {}",
                model.getAction(), model.getOrganizationId());
    };
}
Config:
spring.cloud.function.definition=loggerSink
spring.cloud.stream.bindings.loggerSink-in-0.destination=orgChangeTopic
spring.cloud.stream.bindings.loggerSink-in-0.group=licensingGroup
spring.cloud.stream.kafka.binder.brokers=kafka
If you want the input/output binding names to be specifically input or output rather than names ending with in-0, out-0, etc., there are ways to make that happen, for example as sketched below. Details are in the reference docs.
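One way to do the remapping is the function-binding alias property (a sketch; verify the exact form against the reference docs for your version):

spring.cloud.stream.function.bindings.loggerSink-in-0=input
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.group=licensingGroup

After the alias is declared, all further binding configuration uses the custom name instead of the generated loggerSink-in-0.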

Load balancing problems with Spring Cloud Kubernetes

We have Spring Boot services running in Kubernetes and are using the Spring Cloud Kubernetes load balancer functionality with RestTemplate to make calls to other Spring Boot services. One of the main reasons we have this in place is historical: previously we ran our services in EC2 using Eureka for service discovery, and after the migration we kept the Spring discovery client/client-side load balancing in place (updating dependencies etc. for it to work with the Spring Cloud Kubernetes project).
We have a problem that when one of the target pods goes down we get multiple failures for requests for a period of time, with java.net.NoRouteToHostException, i.e. the Spring load balancer is still trying to send to that pod.
So I have a few questions on this:
Shouldn't the target instance get removed automatically when this happens? So it might happen once but after that, the target pod list will be repaired?
Or if not is there some other configuration we need to add to handle this - eg retry / circuit breaker, etc?
A more general question is what benefit does Spring's client-side load balancing bring with Kubernetes? Without it, our service would still be able to call other services using Kubernetes built-in service / load-balancing functionality and this should handle the issue of pods going down automatically. The Spring documentation also talks about being able to switch from POD mode to SERVICE mode (https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#loadbalancer-for-kubernetes). But isn't this service mode just what Kubernetes does automatically? I'm wondering if the simplest solution here isn't to remove the Spring Load Balancer altogether? What would we lose then?
An update on this: we had the spring-retry dependency in place, but the retry was not working because by default it only retries GETs, and most of our calls are POSTs (but OK to call again). Adding the configuration spring.cloud.loadbalancer.retry.retryOnAllOperations: true fixed this, and hence most of these failures should be avoided by the retry using an alternative instance on the second attempt.
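For reference, the equivalent application.yml fragment (a sketch; this assumes the spring-retry dependency is on the classpath, since the load balancer's retry support requires it):

spring:
  cloud:
    loadbalancer:
      retry:
        # Also retry non-idempotent operations such as POST
        retryOnAllOperations: true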
We have also added a RetryListener that clears the load balancer cache for the service on certain connection exceptions:
@Configuration
public class RetryConfig {

    private static final Logger logger = LoggerFactory.getLogger(RetryConfig.class);

    // Need to use the bean factory here as we can't autowire LoadBalancerCacheManager -
    // it's set to 'autowireCandidate = false' in LoadBalancerCacheAutoConfiguration
    @Autowired
    private BeanFactory beanFactory;

    @Bean
    public CacheClearingLoadBalancedRetryFactory cacheClearingLoadBalancedRetryFactory(ReactiveLoadBalancer.Factory<ServiceInstance> loadBalancerFactory) {
        return new CacheClearingLoadBalancedRetryFactory(loadBalancerFactory);
    }

    // Extension of the default bean that defines a retry listener
    public class CacheClearingLoadBalancedRetryFactory extends BlockingLoadBalancedRetryFactory {

        public CacheClearingLoadBalancedRetryFactory(ReactiveLoadBalancer.Factory<ServiceInstance> loadBalancerFactory) {
            super(loadBalancerFactory);
        }

        @Override
        public RetryListener[] createRetryListeners(String service) {
            RetryListener cacheClearingRetryListener = new RetryListener() {
                @Override
                public <T, E extends Throwable> boolean open(RetryContext context, RetryCallback<T, E> callback) { return true; }

                @Override
                public <T, E extends Throwable> void close(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {}

                @Override
                public <T, E extends Throwable> void onError(RetryContext context, RetryCallback<T, E> callback, Throwable throwable) {
                    logger.warn("Retry for service {} picked up exception: context {}, throwable class {}", service, context, throwable.getClass());
                    if (throwable instanceof ConnectTimeoutException || throwable instanceof NoRouteToHostException) {
                        try {
                            LoadBalancerCacheManager loadBalancerCacheManager = beanFactory.getBean(LoadBalancerCacheManager.class);
                            Cache loadBalancerCache = loadBalancerCacheManager.getCache(CachingServiceInstanceListSupplier.SERVICE_INSTANCE_CACHE_NAME);
                            if (loadBalancerCache != null) {
                                boolean result = loadBalancerCache.evictIfPresent(service);
                                logger.warn("Load Balancer Cache evictIfPresent result for service {} is {}", service, result);
                            }
                        } catch (Exception e) {
                            logger.error("Failed to clear load balancer cache", e);
                        }
                    }
                }
            };
            return new RetryListener[] { cacheClearingRetryListener };
        }
    }
}
Are there any issues with this approach? Could something like this be added to the built in functionality?
Shouldn't the target instance get removed automatically when this happens? So it might happen once but after that the target pod list will be repaired?
To resolve this issue you have to use Readiness and Liveness Probes in Kubernetes.
The readiness probe checks the health of an endpoint your application exposes, at a configured interval. If the check fails, Kubernetes marks the Pod as unready to accept traffic, so no traffic goes to that Pod (replica).
The liveness probe restarts your application if it fails, so the container (Pod) comes up again; once Kubernetes gets a 200 response from the app, it marks the Pod as ready to accept traffic.
You can create a simple endpoint in the application that returns a 200 or 204 response, as needed.
Read more at: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
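For illustration, a minimal probe sketch for a pod spec (the paths assume Spring Boot Actuator's health groups are exposed; adjust the port and paths to your application):

livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5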
Make sure your applications use the Kubernetes Service to talk to each other:
Application 1 > Kubernetes Service of App 2 > Application 2 Pods
To enable load balancing based on the Kubernetes Service name, use the following property. The load balancer will then try to call the application using its address, for example service-a.default.svc.cluster.local:
spring.cloud.kubernetes.loadbalancer.mode=SERVICE
The most typical way to use Spring Cloud LoadBalancer on Kubernetes is with service discovery. If you have any DiscoveryClient on your classpath, the default Spring Cloud LoadBalancer configuration uses it to check for service instances. As a result, it only chooses from instances that are up and running. All that is needed is to annotate your Spring Boot application with @EnableDiscoveryClient to enable K8s-native Service Discovery.
References: https://stackoverflow.com/a/68536834/5525824

Can I use Apache Camel in an AWS Lambda?

Apache Camel has a number of features which make event processing elegant and easy to code. It would be useful to be able to exploit these in an AWS Lambda.
Of course not all features are appropriate, especially anything requiring a long-lived process.
Also, managing persistent state, for example idempotent repositories and throttling, would need thinking about.
But it would be really useful in simple cases.
It turns out that this is simple using Red Hat's Quarkus framework.
I've made a simple example: https://github.com/jcable/SampleCamelLambda
The Camel Route is trivial:
from("direct:input").to("log:input")
.process(new Processor() {
public void process(Exchange exchange) throws Exception {
InputObject input = exchange.getIn().getBody(InputObject.class);
String result = input.getGreeting() + " " + input.getName();
OutputObject out = new OutputObject();
out.setResult(result);
out.setRequestId("aws-request-1");
exchange.getIn().setBody(out);
}
});
Adapting the route to the Lambda makes use of a Quarkus RequestHandler.
public class Lambda implements RequestHandler<InputObject, OutputObject> {

    @Inject
    CamelContext camelContext;

    @Override
    public OutputObject handleRequest(InputObject input, Context context) {
        return camelContext.createProducerTemplate().requestBody("direct:input", input, OutputObject.class);
    }
}
CDI is used to inject the CamelContext into the request handler, and the camelContext object is then used to create a ProducerTemplate, which can be used to invoke the Camel route.
The Maven project for the example is derived from the Quarkus lambda example with Apache Camel dependencies from the Camel Quarkus examples.
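For orientation, a sketch of the kind of dependencies involved (artifact IDs are assumptions based on the Quarkus and Camel Quarkus ecosystems; check the linked repositories for the authoritative list):

<!-- Quarkus AWS Lambda support (assumed artifact) -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-amazon-lambda</artifactId>
</dependency>
<!-- Camel Quarkus extensions for the components used in the route (assumed artifacts) -->
<dependency>
    <groupId>org.apache.camel.quarkus</groupId>
    <artifactId>camel-quarkus-direct</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.camel.quarkus</groupId>
    <artifactId>camel-quarkus-log</artifactId>
</dependency>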

Spring boot actuator breaks @AutoConfigureMockRestServiceServer

I have a class which builds multiple RestTemplates using RestTemplateBuilder:
private RestTemplate build(RestTemplateBuilder restTemplateBuilder) {
    return restTemplateBuilder
            .rootUri("http://localhost:8080/rest")
            .build();
}
For my test setup I use @AutoConfigureMockRestServiceServer and mock responses using MockServerRestTemplateCustomizer:
mockServerRestTemplateCustomizer.getServer()
    .expect(ExpectedCount.times(2),
            requestToUriTemplate("/some/path/{withParameters}", "withParameters"))
    .andRespond(withSuccess());
My test passes when I comment out the spring-boot-actuator dependency in my pom, and fails when the dependency is present, with the following message.
Expected: /some/path/parameter
Actual: http://localhost:8080/rest/pos/some/path/withParameters
I noticed by debugging through MockServerRestTemplateCustomizer that spring-boot-actuator applies a "DelegateHttpClientInterceptor" to support its built-in metrics for RestTemplates. However, this creates a problem with the following code, which I found in RootUriRequestExpectationManager:
public static RequestExpectationManager forRestTemplate(RestTemplate restTemplate,
        RequestExpectationManager expectationManager) {
    Assert.notNull(restTemplate, "RestTemplate must not be null");
    UriTemplateHandler templateHandler = restTemplate.getUriTemplateHandler();
    if (templateHandler instanceof RootUriTemplateHandler) {
        return new RootUriRequestExpectationManager(((RootUriTemplateHandler) templateHandler).getRootUri(),
                expectationManager);
    }
    return expectationManager;
}
Because, as mentioned above, spring-boot-actuator registers a "DelegateHttpClientInterceptor", the code above does not recognize the RootUriTemplateHandler and therefore does not match the request using requestToUriTemplate.
What am I missing here to get this working?
As Andy Wilkinson pointed out, this seems to be a bug in Spring Boot. I created an issue with a sample project.

Server-side schema validation with JAX-WS

I have a container-less JAX-WS service (published via Endpoint.publish() right from the main() method). I want my service to validate input messages. I have tried the following annotation: @SchemaValidation(handler=MyErrorHandler.class) and implemented an appropriate class. When I start the service, I get the following:
Exception in thread "main" javax.xml.ws.WebServiceException:
Annotation @com.sun.xml.internal.ws.developer.SchemaValidation(outbound=true,
inbound=true, handler=class mypackage.MyErrorHandler) is not recognizable,
atleast one constructor of class
com.sun.xml.internal.ws.developer.SchemaValidationFeature
should be marked with @FeatureConstructor
I have found a few solutions on the internet; all of them imply the use of the WebLogic container. I can't use a container in my case - I need an embedded service. Can I still use schema validation?
The @SchemaValidation annotation is not defined in the JAX-WS spec; validation is left open. This means you need something more than just the classes in the JDK.
As long as you are able to add some jars to your classpath, you can set this up pretty easily using Metro (which is also included in WebLogic - this is why you find solutions that use WebLogic as the container). To be more precise, you need to add two jars to your classpath. I'd suggest to:
Download the most recent Metro release.
Unzip it somewhere.
Add the jaxb-api.jar and jaxws-api.jar to your classpath. You can do this, for example, by putting them into JAVA_HOME/lib/endorsed or by manually adding them to your project. This largely depends on the IDE or whatever you are using.
Once you have done this, your MyErrorHandler should work even when the service is deployed via Endpoint.publish(). At least I have this set up locally, and it compiles and works.
If you are not able to modify your classpath and need validation, you will have to validate the request manually using JAXB.
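A minimal sketch of what that manual JAXB validation could look like (the schema file name my-service.xsd is an assumption for illustration, and the request type is borrowed from the example below):

import javax.xml.XMLConstants;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

public class ManualValidator {

    public ValidatedRequest unmarshalAndValidate(StreamSource xml) throws Exception {
        // Compile the schema (hypothetical my-service.xsd on the classpath)
        SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = sf.newSchema(getClass().getResource("/my-service.xsd"));

        JAXBContext ctx = JAXBContext.newInstance(ValidatedRequest.class);
        Unmarshaller unmarshaller = ctx.createUnmarshaller();
        // With a schema set, unmarshalling fails on schema-invalid input
        unmarshaller.setSchema(schema);
        return (ValidatedRequest) unmarshaller.unmarshal(xml);
    }
}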
Old question, but I solved the problem using the correct package and minimal configuration, using only the services provided by WebLogic. I was hitting the same problem as you.
Just make sure you use the correct Java types, as I described here.
As I am planning to expand to a tracking mechanism, I also implemented a custom error handler.
Web Service with custom validation handler
import com.sun.xml.ws.developer.SchemaValidation;

@Stateless
@WebService(portName="ValidatedService")
@SchemaValidation(handler=MyValidator.class)
public class ValidatedService {

    public ValidatedResponse operation(@WebParam(name = "ValidatedRequest") ValidatedRequest request) {
        /* do business logic */
        return response;
    }
}
Custom Handler to log and store error in database
public class MyValidator extends ValidationErrorHandler {

    private static java.util.logging.Logger log = LoggingHelper.getServerLogger();

    @Override
    public void warning(SAXParseException exception) throws SAXException {
        handleException(exception);
    }

    @Override
    public void error(SAXParseException exception) throws SAXException {
        handleException(exception);
    }

    @Override
    public void fatalError(SAXParseException exception) throws SAXException {
        handleException(exception);
    }

    private void handleException(SAXParseException e) throws SAXException {
        log.log(Level.SEVERE, "Validation error", e);
        // Record in database for tracking etc.
        throw e;
    }
}
