Using DynamoDBEnhancedAsyncClient to scan and fetch futureObject - spring-boot

I am trying to use the v2 library to persist and retrieve data in a non-blocking manner.
The put method of DynamoDbEnhancedAsyncClient returns a CompletableFuture, but the scan and query methods return a PagePublisher object, which suggests to me that these are blocking calls. Can someone please help me understand or fix this? I want to implement end-to-end non-blocking calls. I tried the plain DynamoDbAsyncClient and that works perfectly, but I want to get rid of manually mapping objects, which is why I switched to DynamoDbEnhancedAsyncClient; however, I see no method on it that returns a CompletableFuture.
Here is my code block
DynamoDbAsyncTable<User> asyncTable = dynamoDBEnhancedAsyncClient.table("userTable", TableSchema.fromBean(User.class));
Map<String, AttributeValue> expVal = new HashMap<>();
expVal.put(":val", AttributeValue.builder().n(String.valueOf(userId)).build());
Expression exp = Expression.builder().expression("userId = :val").expressionValues(expVal).build();
ScanEnhancedRequest req = ScanEnhancedRequest.builder().filterExpression(exp).build();
PagePublisher<User> pagePublisher = asyncTable.scan(req);
Dependencies I used
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>dynamodb</artifactId>
    <version>2.10.76</version>
</dependency>
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>dynamodb-enhanced</artifactId>
    <version>2.12.0</version>
</dependency>

The AWS SDK for Java v2 uses Reactive Streams to build its asynchronous APIs.
PagePublisher<T> does not make your call blocking; this class implements the Reactive Streams Publisher interface, which allows you to subscribe to it.
Option 1
Since your question asks how to turn the Publisher into a CompletableFuture, here is a rough example of how to do it:
var publisher = asyncTable.scan(req);
var future = new CompletableFuture<Page<User>>();
publisher.subscribe(new Subscriber<>() {
    @Override
    public void onSubscribe(Subscription s) {
        s.request(1);
    }

    @Override
    public void onNext(Page<User> userPage) {
        future.complete(userPage);
    }

    @Override
    public void onError(Throwable t) {
        future.completeExceptionally(t);
    }

    @Override
    public void onComplete() {
        future.complete(null);
    }
});
var result = future.join();
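Note that future.join() blocks the calling thread, so in a fully non-blocking flow you would compose on (or return) the future instead of joining it. As an aside not taken from the original answer (a sketch, assuming a recent SDK version), the SDK's SdkPublisher also offers a consumer-based subscribe that already returns a CompletableFuture, which collects the items without writing a Subscriber by hand:

// Collects every scanned item; the returned future completes when the scan is done.
List<User> users = new CopyOnWriteArrayList<>();   // from java.util.concurrent
CompletableFuture<Void> done = asyncTable.scan(req).items().subscribe(users::add);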
Option 2 (Recommended)
However, I saw you tagged this question with spring-boot, and you mention that you want to implement non-blocking calls end to end.
I highly recommend integrating Spring WebFlux with AWS SDK v2, which makes it much easier to create a non-blocking/reactive web service.
With Spring WebFlux you can integrate your code like this:
Mono.from(asyncTable.scan(req))
which makes the code cleaner and simpler.
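To illustrate the end-to-end reactive flow, here is a minimal sketch of a WebFlux endpoint (the controller name, the injected table bean, and the /users route are assumptions for illustration, not from the original question):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbAsyncTable;
import software.amazon.awssdk.enhanced.dynamodb.model.ScanEnhancedRequest;

@RestController
public class UserController {

    private final DynamoDbAsyncTable<User> asyncTable;

    public UserController(DynamoDbAsyncTable<User> asyncTable) {
        this.asyncTable = asyncTable;
    }

    @GetMapping("/users")
    public Flux<User> scanUsers() {
        // PagePublisher#items() flattens the pages into a stream of items;
        // WebFlux subscribes to the returned Flux, so nothing blocks here.
        return Flux.from(asyncTable.scan(ScanEnhancedRequest.builder().build()).items());
    }
}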

Related

Can I use Apache Camel in an AWS Lambda?

Apache Camel has a number of features that make event processing elegant and easy to code. It would be useful to be able to exploit this in an AWS Lambda.
Of course, not all features are appropriate, especially anything requiring a long-lived process.
Also, managing persistent state, for example idempotent repositories and throttling, would need thinking about.
But it would be really useful in simple cases.
It turns out that this is simple using Red Hat's Quarkus framework.
I've made a simple example: https://github.com/jcable/SampleCamelLambda
The Camel Route is trivial:
from("direct:input").to("log:input")
.process(new Processor() {
public void process(Exchange exchange) throws Exception {
InputObject input = exchange.getIn().getBody(InputObject.class);
String result = input.getGreeting() + " " + input.getName();
OutputObject out = new OutputObject();
out.setResult(result);
out.setRequestId("aws-request-1");
exchange.getIn().setBody(out);
}
});
Adapting the route to the Lambda makes use of a Quarkus RequestHandler.
public class Lambda implements RequestHandler<InputObject, OutputObject> {

    @Inject
    CamelContext camelContext;

    @Override
    public OutputObject handleRequest(InputObject input, Context context) {
        return camelContext.createProducerTemplate().requestBody("direct:input", input, OutputObject.class);
    }
}
CDI is used to inject the CamelContext into the request handler, and the camelContext object is then used to create a ProducerTemplate which can invoke the Camel route.
The Maven project for the example is derived from the Quarkus lambda example with Apache Camel dependencies from the Camel Quarkus examples.
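For orientation, the dependencies boil down to the Quarkus Lambda adapter plus the Camel Quarkus starters used by the route; the exact artifacts below are an assumption, so treat the sample repository's pom.xml as authoritative:

<!-- hypothetical excerpt; see the sample repository for the real pom.xml -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-amazon-lambda</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.camel.quarkus</groupId>
    <artifactId>camel-quarkus-direct</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.camel.quarkus</groupId>
    <artifactId>camel-quarkus-log</artifactId>
</dependency>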

How to mask sensitive information while logging in spring integration framework

I have a requirement to mask sensitive information while logging. We are using the wire-tap provided by the Spring Integration framework for logging, and we have many interfaces already designed that log using wire-tap. We are currently using Spring Boot 2.1 and Spring Integration.
I hope that all your integration flows log via the mentioned single global wire-tap.
That wire-tap is just the start of another integration flow anyway: it is not limited to a channel with a logger on it. You really can build a wire-tapped flow of any complexity.
My point is that you can add a transformer before the logging-channel-adapter and mask the payload and/or headers in any required way. The logger will then receive already-masked data.
Another way is to use some masking functionality in the log-expression. You may call some bean for masking or a static utility there: https://docs.spring.io/spring-integration/reference/html/#logging-channel-adapter
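As a rough illustration of the transformer idea (the channel name, the masking rule, and the bean name below are assumptions, not from the original answer):

import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.handler.LoggingHandler;

@Bean
public IntegrationFlow maskedLoggingFlow() {
    // "logChannel" stands for the channel your global wire-tap sends to (hypothetical name)
    return IntegrationFlows.from("logChannel")
            // mask card-like digit runs before anything reaches the logger
            .transform(String.class, payload -> payload.replaceAll("\\d{12,19}", "****"))
            .log(LoggingHandler.Level.INFO, "masked")
            .get();
}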
I don't know if this is a fancy approach, but I ended up implementing a sort of "error message filter" to mask headers in case the sensitive one is present (this can be extended to multiple header names, but it gives the idea):
@Component
public class ErrorMessageFilter {

    private static final String SENSITIVE_HEADER_NAME = "sensitive_header";

    public Throwable filterErrorMessage(Throwable payload) {
        if (payload instanceof MessagingException) {
            Message<?> failedMessage = ((MessagingException) payload).getFailedMessage();
            if (failedMessage != null && failedMessage.getHeaders().containsKey(SENSITIVE_HEADER_NAME)) {
                MessageHeaderAccessor headerAccessor = new MessageHeaderAccessor(failedMessage);
                headerAccessor.setHeader(SENSITIVE_HEADER_NAME, "XXX");
                // withPayload(...) is MessageBuilder.withPayload, statically imported
                return new MessagingException(withPayload(failedMessage.getPayload()).setHeaders(headerAccessor)
                        .build());
            }
        }
        return payload;
    }
}
Then, in the @Configuration class, I added a way to wire my filter into Spring Integration's LoggingHandler:
@Autowired
public void setLoggingHandlerLogExpression(LoggingHandler loggingHandler, ErrorMessageFilter messageFilter) {
    loggingHandler.setLogExpression(new FunctionExpression<Message<?>>((m) -> {
        if (m instanceof ErrorMessage) {
            return messageFilter.filterErrorMessage(((ErrorMessage) m).getPayload());
        }
        return m.getPayload();
    }));
}
This also gave me the flexibility to reuse my filter in other components where I handle error messages (e.g. sending error notifications to Zabbix, etc.).
P.S.: sorry about all the instanceof checks and ifs, but at a certain layer the dirty code has to start.

Spring and Azure function

Does Spring work with Azure Functions?
For example, a REST API whose code uses the @Autowired annotation (after running mvn azure-functions:run I get a NullPointerException on "myScriptService").
import java.util.*;
import com.microsoft.azure.serverless.functions.annotation.*;
import com.microsoft.azure.serverless.functions.*;
import com.company.ScriptService;
import org.springframework.beans.factory.annotation.Autowired;

/**
 * Azure Functions with HTTP Trigger.
 */
public class Function {

    @Autowired
    ScriptService myScriptService;

    /**
     * This function listens at endpoint "/api/hello". Two ways to invoke it using "curl" command in bash:
     * 1. curl -d "HTTP Body" {your host}/api/hello
     * 2. curl {your host}/api/hello?name=HTTP%20Query
     */
    @FunctionName("myhello")
    public HttpResponseMessage<String> hello(
            @HttpTrigger(name = "req",
                    methods = "post",
                    authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Java HTTP trigger processed a request.");
        // Parse query parameter
        String query = request.getQueryParameters().get("name");
        String name = request.getBody().orElse(query);
        if (name == null) {
            return request.createResponse(400, "Please pass a name on the query string or in the request body");
        } else {
            return request.createResponse(200, "Hello, " + name + ", myScriptService.isEnabled(): " + myScriptService.isEnabled());
        }
    }
}
As some have asked for a solution in the comments above, I'm assuming that this problem might be of relevance for other users, too.
So I think Spring Cloud Function is the magic word here: besides some other points (see the project page for details), it aims to enable Spring Boot features (like the dependency injection you're looking for) on serverless providers (besides Azure Functions, AWS Lambda and Apache OpenWhisk are also supported).
So you have to make some modifications to your project:
Add the spring-cloud-function-adapter-azure dependency:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-function-adapter-azure</artifactId>
    <version>2.0.1.RELEASE</version>
</dependency>
Your handler class needs some additional code:
Add the @SpringBootApplication annotation
Add the main() method known from Spring Boot applications
Make sure that Spring can find your ScriptService class, e.g. by using the @ComponentScan annotation
It should look like this:
@SpringBootApplication
@ComponentScan(basePackages = { "package.of.scriptservice" })
public class Function {

    @Autowired
    ScriptService myScriptService;

    @FunctionName("myhello")
    public HttpResponseMessage<String> hello(
            @HttpTrigger(name = "req", methods = "post", authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        // Your code here
    }

    public static void main(String[] args) {
        SpringApplication.run(Function.class, args);
    }
}
You can find a full example here and here
It looks like there are a lot of changes between Spring Cloud Function v1 and v2. Have a quick look at this blog post: https://spring.io/blog/2018/09/25/spring-cloud-function-2-0-and-azure-functions
If you build your project like the example, Spring will create the Spring Boot context when the Azure function is called (i.e. when handleRequest is invoked), but the Spring context is not available before that.
Did you add your package for Spring Cloud Function to scan?
spring.cloud.function.scan.packages=yourPackage
This goes in your application.properties.

How to publish an event to multiple instances from the Axon command side

I tried to implement an application with CQRS and event sourcing using the Axon Framework. I implemented the command side and the query side as separate micro-services and replicated (scaled up) the query micro-service. I use RabbitMQ as the message broker. When the command side publishes an event, it does not reach all query micro-service instances; the messages are distributed in a round-robin way. How can I update all micro-service instances at the same time?
Here is my dependency file
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
    <groupId>org.axonframework</groupId>
    <artifactId>axon-amqp</artifactId>
    <version>${axon.version}</version>
</dependency>
<dependency>
    <groupId>org.axonframework</groupId>
    <artifactId>axon-spring-boot-starter</artifactId>
    <version>${axon.version}</version>
</dependency>
These are my configurations on the command side:
@Bean
public Exchange exchange() {
    return ExchangeBuilder.fanoutExchange("SeatReserveEvents").build();
}

@Bean
public Queue queue() {
    return QueueBuilder.durable("SeatReserveEvents").build();
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue()).to(exchange()).with("*").noargs();
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(exchange());
    admin.declareQueue(queue());
    admin.declareBinding(binding());
}
This is application.yml
axon:
  amqp:
    exchange: SeatReserveEvents
These are the query-side configurations:
@Bean
public SpringAMQPMessageSource statisticsQueue(Serializer serializer) {
    return new SpringAMQPMessageSource(new DefaultAMQPMessageConverter(serializer)) {
        @RabbitListener(queues = "SeatReserveEvents")
        @Override
        public void onMessage(Message arg0, Channel arg1) throws Exception {
            super.onMessage(arg0, arg1);
        }
    };
}
This is the handler:
@Component
@ProcessingGroup("statistics")
public class EventLoggingHandler {

    @EventHandler
    protected void on(SeatResurvationCreateEvent event) {
        System.err.println(event);
    }

    @EventHandler
    protected void on(SeatReservationUpdateEvent event) {
        System.err.println(event);
    }
}
This is the application.yml:
axon:
  eventhandling:
    processors:
      statistics.source: statisticsQueue
I'd say this is more an AMQP/RabbitMQ configuration matter than an Axon Framework specific question. That said, you'd want to set up RabbitMQ to do pub/sub rather than round-robin, as described in the RabbitMQ publish/subscribe tutorial.
I do, however, have another, more Axon Framework specific response in mind.
Why publish your events on a queue at all, if you could also pull the events from the store directly? You'd have TrackingEventProcessors on the query side of your application, which pull events from the event store as they get appended by the command side of your application.
That's how a monolithic version of an Axon Framework application incorporating CQRS would initially look anyway. Hence the simplest next step to split that CQRS application into a command and a query side is to leave the way of receiving events as is, without adding a queue in between.
If you've got specific requirements to publish over a queue, however, or you just prefer a queue over letting the query applications pull from the event store directly, please disregard this comment and revert to the RabbitMQ tutorial.
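For reference, a minimal sketch of the tracking-processor alternative for the "statistics" processing group (Axon 4 / Spring Boot style; whether this matches your Axon version is an assumption):

import org.axonframework.config.EventProcessingConfigurer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AxonProcessorConfig {

    // Let the "statistics" processing group track the event store directly,
    // instead of consuming from a RabbitMQ queue.
    @Autowired
    public void configureProcessors(EventProcessingConfigurer configurer) {
        configurer.registerTrackingEventProcessor("statistics");
    }
}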
We need to change the RabbitMQ configuration to publish an event to multiple instances from the Axon command side. For that we have to change the publisher-side configuration as below.
@Bean
public FanoutExchange fanoutExchange() {
    FanoutExchange exchange = new FanoutExchange("SeatReserveEvents");
    return exchange;
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(fanoutExchange());
}
The next thing is the subscriber side, where we have to change the bean as below:
@Bean
public SpringAMQPMessageSource statisticsQueue(Serializer serializer) {
    return new SpringAMQPMessageSource(new DefaultAMQPMessageConverter(serializer)) {
        @RabbitListener(bindings = @QueueBinding(
                value = @Queue,
                exchange = @Exchange(value = "SeatReserveEvents", type = ExchangeTypes.FANOUT),
                key = "orderRoutingKey"))
        @Override
        public void onMessage(Message arg0, Channel arg1) throws Exception {
            super.onMessage(arg0, arg1);
        }
    };
}
Now we can replicate the consumer across more instances. This is the publish/subscribe pattern, and the exchange type is fanout.

Does CompletableFuture have a corresponding Local context?

In the olden days, we had ThreadLocal for programs to carry data along the request path, since all request processing was done on that thread, and things like Logback use this with MDC.put("requestId", getNewRequestId());
Then Scala and functional programming came along, and Futures came along, and with them came Local.scala (at least I know the Twitter Futures have this class). Future.scala knows about Local.scala and transfers the context through all the map/flatMap, etc. functionality, so I can still do Local.set("requestId", getNewRequestId()); and then downstream, after it has travelled over many threads, I can still access it with Local.get(...).
So, my question is: in Java, can I do the same thing with the new CompletableFuture via some LocalContext or similar object (not sure of the name)? That way I could have the Logback MDC store the request id in that context instead of a ThreadLocal, so that I don't lose the request id and all my logs across thenApply, thenAccept, etc. still work fine with logging and the %X{requestId} pattern in the Logback configuration.
EDIT:
As an example: if you have a request come in and you are using Log4j or Logback, in a filter you will set MDC.put("requestId", requestId), and then in your app you will log many log statements like this:
log.info("request came in for url="+url);
log.info("request is complete");
Now, in the log output it will show this:
INFO {time}: requestId425 request came in for url=/mypath
INFO {time}: requestId425 request is complete
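For completeness, the filter part typically looks roughly like this (a sketch; the filter name, the use of javax.servlet, and generating the id with UUID are assumptions):

import javax.servlet.*;
import org.slf4j.MDC;
import java.io.IOException;
import java.util.UUID;

public class RequestIdFilter implements Filter {
    // init()/destroy() omitted; they have default implementations in Servlet 4.0+

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Bind a request id to the current thread; Logback's %X{requestId} picks it up.
        MDC.put("requestId", UUID.randomUUID().toString());
        try {
            chain.doFilter(req, res);
        } finally {
            MDC.clear(); // don't leak the id to pooled threads
        }
    }
}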
This uses a ThreadLocal trick to achieve it. At Twitter, we use Scala and Twitter Futures in Scala along with a Local.scala class. Local.scala and Future.scala are tied together so that we can still achieve the above scenario, which is very nice: all our log statements can log the request id, so the developer never has to remember to log the request id, and you can trace through a single customer's request/response cycle with that id.
I don't see this in Java :( which is very unfortunate, as there are many use cases for it. Perhaps there is something I am not seeing, though?
If you come across this, just poke the thread here
http://mail.openjdk.java.net/pipermail/core-libs-dev/2017-May/047867.html
to implement something like Twitter Futures, which transfer Locals (much like ThreadLocal, but the state is transferred along).
See the def respond() method in here and how it calls Locals.save() and Locals.restore():
https://github.com/simonratner/twitter-util/blob/master/util-core/src/main/scala/com/twitter/util/Future.scala
If the Java authors would fix this, then the MDC in Logback would work across all third-party libraries. Until then, IT WILL NOT WORK unless you can change the third-party library (doubtful you can do that).
My solution theme would be as follows (it works with JDK 9+, as a couple of overridable methods are exposed since that version):
Make the complete ecosystem aware of MDC.
For that, we need to address the following scenarios:
When do we get new instances of CompletableFuture from within this class? → We need to return an MDC-aware version instead.
When do we get new instances of CompletableFuture from outside this class? → We need to return an MDC-aware version instead.
Which executor is used where in the CompletableFuture class? → In all circumstances, we need to make sure that all executors are MDC-aware.
For that, let's create an MDC-aware version of CompletableFuture by extending it. My version would look like the below:
import org.slf4j.MDC;

import java.util.Map;
import java.util.concurrent.*;
import java.util.function.Function;

public class MDCAwareCompletableFuture<T> extends CompletableFuture<T> {

    public static final ExecutorService MDC_AWARE_ASYNC_POOL = new MDCAwareForkJoinPool();

    @Override
    public <U> CompletableFuture<U> newIncompleteFuture() {
        return new MDCAwareCompletableFuture<>();
    }

    @Override
    public Executor defaultExecutor() {
        return MDC_AWARE_ASYNC_POOL;
    }

    public static <T> CompletionStage<T> getMDCAwareCompletionStage(CompletableFuture<T> future) {
        return new MDCAwareCompletableFuture<>()
                .completeAsync(() -> null)
                .thenCombineAsync(future, (aVoid, value) -> value);
    }

    public static <T> CompletionStage<T> getMDCHandledCompletionStage(CompletableFuture<T> future,
                                                                      Function<Throwable, T> throwableFunction) {
        Map<String, String> contextMap = MDC.getCopyOfContextMap();
        return getMDCAwareCompletionStage(future)
                .handle((value, throwable) -> {
                    // setMDCContext is the utility method shown further below (statically imported)
                    setMDCContext(contextMap);
                    if (throwable != null) {
                        return throwableFunction.apply(throwable);
                    }
                    return value;
                });
    }
}
The MDCAwareForkJoinPool class would look like this (I have skipped the methods with ForkJoinTask parameters for simplicity):
public class MDCAwareForkJoinPool extends ForkJoinPool {
    // Override the constructors you need

    // wrapWithMdcContext(...) is the utility shown next (statically imported)
    @Override
    public <T> ForkJoinTask<T> submit(Callable<T> task) {
        return super.submit(wrapWithMdcContext(task));
    }

    @Override
    public <T> ForkJoinTask<T> submit(Runnable task, T result) {
        return super.submit(wrapWithMdcContext(task), result);
    }

    @Override
    public ForkJoinTask<?> submit(Runnable task) {
        return super.submit(wrapWithMdcContext(task));
    }

    @Override
    public void execute(Runnable task) {
        super.execute(wrapWithMdcContext(task));
    }
}
The utility methods for wrapping with the MDC context would be as follows:
public static <T> Callable<T> wrapWithMdcContext(Callable<T> task) {
    // save the current MDC context
    Map<String, String> contextMap = MDC.getCopyOfContextMap();
    return () -> {
        setMDCContext(contextMap);
        try {
            return task.call();
        } finally {
            // once the task is complete, clear MDC
            MDC.clear();
        }
    };
}

public static Runnable wrapWithMdcContext(Runnable task) {
    // save the current MDC context
    Map<String, String> contextMap = MDC.getCopyOfContextMap();
    return () -> {
        setMDCContext(contextMap);
        try {
            task.run(); // Runnable#run returns void, so there is no value to return here
        } finally {
            // once the task is complete, clear MDC
            MDC.clear();
        }
    };
}

public static void setMDCContext(Map<String, String> contextMap) {
    MDC.clear();
    if (contextMap != null) {
        MDC.setContextMap(contextMap);
    }
}
Below are some guidelines for usage:
Use the class MDCAwareCompletableFuture rather than the class CompletableFuture.
A couple of methods in the class CompletableFuture instantiate the class itself, such as new CompletableFuture.... For such methods (most of the public static methods), use an alternative way to get an instance of MDCAwareCompletableFuture. For example, rather than using CompletableFuture.supplyAsync(...), you can choose new MDCAwareCompletableFuture<>().completeAsync(...).
Convert an instance of CompletableFuture into an MDCAwareCompletableFuture by using the method getMDCAwareCompletionStage when you get stuck with one, say because some external library returns you a plain CompletableFuture. Obviously, you can't retain the context within that library, but this method will still retain the context once execution returns to your application code.
While supplying an executor as a parameter, make sure that it is MDC-aware, such as MDCAwareForkJoinPool. You could create an MDCAwareThreadPoolExecutor by overriding its execute method as well to serve your use case. You get the idea! (A quick usage sketch follows below.)
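To tie it together, a minimal usage sketch (illustrative only; the literal request id and the simple string work are assumptions, not from the original answer):

MDC.put("requestId", "req-42");

new MDCAwareCompletableFuture<String>()
        .completeAsync(() -> "hello")              // runs on MDC_AWARE_ASYNC_POOL
        .thenApplyAsync(greeting ->
                // the MDC-aware pool re-applies the captured context on this thread,
                // so the request id is still visible to logging here
                greeting + " (requestId=" + MDC.get("requestId") + ")")
        .join();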
You can find a detailed explanation of all of the above in a post on the same topic.

Resources