I want to start a long-running database operation in a new thread, so the persistence context must be available, but there is no return value (or the return value is not needed). Usually I do:
@Inject
MyRepository panacheRepo;

new Thread(() -> {
    panacheRepo.cleanupDatabase();
}).start();
How do I achieve this in Quarkus?
@Inject
ManagedExecutor managedExecutor;
Then you can submit a task to it.
managedExecutor.execute(() -> methodToExecute());
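For context, here is a minimal end-to-end sketch of that answer (assuming a recent Quarkus with jakarta.* packages, that quarkus-smallrye-context-propagation is on the classpath so the executor propagates CDI context, and that cleanupDatabase() manages its own transaction, e.g. via @Transactional):

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.context.ManagedExecutor;

@ApplicationScoped
public class CleanupTrigger {

    @Inject
    MyRepository panacheRepo; // the repository from the question

    @Inject
    ManagedExecutor managedExecutor;

    public void startCleanup() {
        // The task runs on a managed thread with contexts propagated, so
        // injected beans and the persistence context are usable inside it;
        // unlike a bare new Thread(...), no manual start() is needed.
        managedExecutor.execute(() -> panacheRepo.cleanupDatabase());
    }
}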
I am developing a Spring Boot application with Activiti as the workflow engine. The activiti-spring-boot-starter dependency version is 7.1.0.M6 and the spring-boot-starter-parent version is 2.6.7.
I have defined a BPMN 2.0 diagram using activiti-modelling-app and I am now starting the process instance. After completing a task, I want to access its task-local variables when processing the next task. I am unable to figure out the API for it.
I tried using the historyService as below, but with no luck. I get an empty result list every time, with different APIs (finished(), unfinished(), etc.):
HistoricTaskInstance acceptMobile = historyService.createHistoricTaskInstanceQuery()
.processInstanceId(processInstanceId)
.taskName("my-task1")
.singleResult();
Can someone guide me on the right API to use to get the local variables of a previously completed task?
Thanks.
The best way to transfer variables between tasks is to use execution variables via DelegateExecution.
Execution variables live on the execution, i.e. the pointer to where the process is currently active; for more information, see apiVariables.
Say you have Task-A and Task-B with different listeners.
Here's how to pass an execution variable from Task-A to Task-B:
#Component("TaskListenerA")
public class TaskListenerA implements TaskListener {
#Override
public void notify(DelegateTask task) {
DelegateExecution execution = task.getExecution();
if("complete".equals(task.getEventName()) {
String myTaskVar = (String) task.getVariable("taskAvariable")
execution.setVariable("exeVariable", myTaskVar);
}
}
}
#Component("TaskListenerB")
public class TaskListenerB implements TaskListener {
#Override
public void notify(DelegateTask task) {
DelegateExecution execution = task.getExecution();
String myVariable = execution.get("exeVariable");
}
}
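To answer the original question directly as well: once a task has completed, its variables can typically be read back through the history API, provided the history level is at least "audit" so variable values are recorded. A sketch (taskId() restricts the query to task-local variables; the task and variable names are the ones from the question):

HistoricTaskInstance taskA = historyService.createHistoricTaskInstanceQuery()
        .processInstanceId(processInstanceId)
        .taskName("my-task1")
        .finished()
        .singleResult();

List<HistoricVariableInstance> taskLocalVars = historyService
        .createHistoricVariableInstanceQuery()
        .taskId(taskA.getId())
        .list();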
I'm creating a Spring Reactor application to consume messages from a WebSocket server, transform them, and later save them to Redis and some SQL database; saving to Redis and the SQL database is also reactive. Also, before being written to Redis and the SQL database, messages will be windowed (with different timespans) and aggregated.
I'm not sure whether the way I've accomplished this is proper, reactive-wise, i.e. whether I'm losing any reactive benefits (performance).
First, let me show you what I got:
@Service
class WebSocketsConsumer {

    public ConnectableFlux<String> webSocketFlux() {
        return Flux.<String>create(emitter -> {
            createWebSocketClient()
                    .execute(URI.create("wss://some-url-goes-here.com"), session -> {
                        WebSocketMessage initialMessage = session.textMessage("SOME_MSG_HERE");
                        Flux<String> flux = session.send(Mono.just(initialMessage))
                                .thenMany(session.receive())
                                .map(WebSocketMessage::getPayloadAsText)
                                .doOnNext(emitter::next);
                        Flux<String> sessionStatus = session.closeStatus()
                                .switchIfEmpty(Mono.just(CloseStatus.GOING_AWAY))
                                .map(CloseStatus::toString)
                                .doOnNext(emitter::next)
                                .flatMapMany(Flux::just);
                        return flux
                                .mergeWith(sessionStatus)
                                .then();
                    })
                    .subscribe(); // 1: highlighted by IntelliJ IDEA: "Calling 'subscribe' in non-blocking scope"
        })
        .publish();
    }

    private ReactorNettyWebSocketClient createWebSocketClient() {
        return new ReactorNettyWebSocketClient(
                HttpClient.create(),
                () -> WebsocketClientSpec.builder().maxFramePayloadLength(131072 * 100)
        );
    }
}
And
@Service
class WebSocketMessageDispatcher {

    private final WebSocketsConsumer webSocketsConsumer;
    private final Consumer<String> reactiveRedisConsumer;
    private final Consumer<String> reactiveJdbcConsumer;
    private Disposable webSocketsDisposable;

    WebSocketMessageDispatcher(WebSocketsConsumer webSocketsConsumer, Consumer<String> redisConsumer, Consumer<String> dbConsumer) {
        this.webSocketsConsumer = webSocketsConsumer;
        this.reactiveRedisConsumer = redisConsumer;
        this.reactiveJdbcConsumer = dbConsumer;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void onReady() {
        ConnectableFlux<String> messages = webSocketsConsumer.webSocketFlux();
        messages.subscribe(reactiveRedisConsumer);
        messages.subscribe(reactiveJdbcConsumer);
        webSocketsDisposable = messages.connect();
    }

    @PreDestroy
    public void onDestroy() {
        if (webSocketsDisposable != null) webSocketsDisposable.dispose();
    }
}
Questions:
Is this a proper use of reactive streams? Maybe the Redis and database writes should be done in flatMap; however, IMO they can't be, as I want them to happen in the background, and they will also aggregate messages over different time windows (see the sketch after these questions). Also note comment 1 in the code above, where IntelliJ IDEA flags my code; the code works, but I wonder what this lint may result in. Maybe I should use doOnNext not to call emitter::next but to invoke some message dispatcher there, with some function like doOnNext(dispatcher::dispatchMessage)?
I want the WebSocket client to start immediately after the application is ready and to stop consuming messages when the application shuts down. Are @EventListener(ApplicationReadyEvent.class) and @PreDestroy, used as shown above, a proper way to handle this scenario in the reactive world?
As I said, saving to Redis and the SQL database is also reactive, i.e. those saves also produce Mono<T>. Is subscribing to those Monos inside the subscribe of the WebSocket flux OK, or should it be accomplished some other way (comments 2 and 3 in the code above)?
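For illustration, the flatMap variant I have in mind would keep the writes inside the pipeline, roughly like this (reactiveRedisSave and reactiveJdbcSave are placeholders for my actual reactive batch saves returning Mono<Void>; the window durations are arbitrary):

// autoConnect(2) connects once both pipelines have subscribed
Flux<String> messages = webSocketsConsumer.webSocketFlux().autoConnect(2);

Disposable redisPipeline = messages
        .window(Duration.ofSeconds(5))
        .flatMap(Flux::collectList)
        .flatMap(batch -> reactiveRedisSave(batch))
        .subscribe();

Disposable jdbcPipeline = messages
        .window(Duration.ofMinutes(1))
        .flatMap(Flux::collectList)
        .flatMap(batch -> reactiveJdbcSave(batch))
        .subscribe();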
I am using Activiti 5.18.
Behind the scenes: there are a few tasks which are routed through a workflow. Some of these tasks are eligible for escalation. I have written my escalation listener as follows.
@Component
public class EscalationTimerListener implements ExecutionListener {

    @Autowired
    ExceptionWorkflowService exceptionWorkflowService;

    @Override
    public void notify(DelegateExecution execution) throws Exception {
        // Process the escalated tasks here
        this.exceptionWorkflowService.escalateWorkflowTask(execution);
    }
}
Now when I start my Tomcat server, the Activiti framework internally calls the listener even before my entire Spring context is loaded. Hence exceptionWorkflowService is null (since Spring hasn't injected it yet!) and my code breaks.
Note: this scenario only occurs if my server isn't running at the escalation time of tasks and I start/restart it after that time. If my server is already running at escalation time, then the process runs smoothly, because the service was injected when the server started and my listener is triggered later.
I have tried delaying the Activiti configuration using the @DependsOn annotation, so that it loads after ExceptionWorkflowService is initialized, as below.
@Bean
@DependsOn({ "dataSource", "transactionManager", "exceptionWorkflowService" })
public SpringProcessEngineConfiguration getConfiguration() {
    final SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
    config.setAsyncExecutorActivate(true);
    config.setJobExecutorActivate(true);
    config.setDataSource(this.dataSource);
    config.setTransactionManager(this.transactionManager);
    config.setDatabaseSchemaUpdate(this.schemaUpdate);
    config.setHistory(this.history);
    config.setTransactionsExternallyManaged(this.transactionsExternallyManaged);
    config.setDatabaseType(this.dbType);

    // Async job executor
    final DefaultAsyncJobExecutor asyncExecutor = new DefaultAsyncJobExecutor();
    asyncExecutor.setCorePoolSize(2);
    asyncExecutor.setMaxPoolSize(50);
    asyncExecutor.setQueueSize(100);
    config.setAsyncExecutor(asyncExecutor);

    return config;
}
But this gives a circular reference error.
I have also tried adding a bean to SpringProcessEngineConfiguration as below.
Map<Object, Object> beanObjectMap = new HashMap<>();
beanObjectMap.put("exceptionWorkflowService", new ExceptionWorkflowServiceImpl());
config.setBeans(beanObjectMap);
and then access it in my listener as:
Map<Object, Object> registeredBeans = Context.getProcessEngineConfiguration().getBeans();
ExceptionWorkflowService exceptionWorkflowService = (ExceptionWorkflowService) registeredBeans.get("exceptionWorkflowService");
exceptionWorkflowService.escalateWorkflowTask(execution);
This works, but my repository is autowired into my service, and the repository hasn't been initialized yet! So it again throws an error in the service layer :)
So, is there a way I can trigger escalation listeners only after my entire Spring context is loaded?
Have you tried binding the class to ApplicationListener?
Not sure if it will work, but equally I'm not sure why your listener code is actually being executed on startup.
Try setting the implementation type of the listeners using a Java class or a delegate expression, and then have the class implement JavaDelegate instead of ExecutionListener; a sketch of the delegate-expression route follows.
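Here is a sketch of the delegate-expression route, keeping ExecutionListener for the listener case (the key point is that Activiti resolves the bean from the Spring context at execution time instead of instantiating the class itself, so the service arrives already injected; the bean name in the expression must match the @Component name):

// In the process definition:
//   <activiti:executionListener event="start"
//       delegateExpression="${escalationTimerListener}" />
@Component("escalationTimerListener")
public class EscalationTimerListener implements ExecutionListener {

    @Autowired
    ExceptionWorkflowService exceptionWorkflowService;

    @Override
    public void notify(DelegateExecution execution) throws Exception {
        // Resolved via Spring, so the service is injected before this runs
        exceptionWorkflowService.escalateWorkflowTask(execution);
    }
}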
In the olden days, we had ThreadLocal for programs to carry data along the request path, since all request processing was done on that thread, and things like Logback used this with MDC.put("requestId", getNewRequestId());
Then Scala and functional programming came along, and Futures came along, and with them came Local.scala (at least I know the Twitter Futures have this class). Future.scala knows about Local.scala and transfers the context through all the map/flatMap etc. functionality, such that I can still do Local.set("requestId", getNewRequestId()); and then downstream, after it has travelled over many threads, I can still access it with Local.get(...).
So, my question is: in Java, can I do the same thing with the new CompletableFuture via some LocalContext or similar object (not sure of the name)? In this way, I could modify the Logback MDC to store the request id in that context instead of a ThreadLocal, such that I don't lose the request id, and all my logs across thenApply, thenAccept, etc. still work just fine with the %X{requestId} pattern in the Logback configuration.
EDIT:
As an example: if you have a request come in and you are using Log4j or Logback, in a filter you will set MDC.put("requestId", requestId), and then in your app you will log many log statements like this:
log.info("request came in for url="+url);
log.info("request is complete");
Now, in the log output it will show this:
INFO {time}: requestId425 request came in for url=/mypath
INFO {time}: requestId425 request is complete
This uses a ThreadLocal trick to achieve the result. At Twitter, we use Scala and Twitter Futures in Scala along with a Local.scala class. Local.scala and Future.scala are tied together so that we can still achieve the above scenario, which is very nice: all our log statements can log the request id, so the developer never has to remember to log it, and you can trace through a single customer's request/response cycle with that id.
I don't see this in Java :( which is very unfortunate, as there are many use cases for it. Perhaps there is something I am not seeing, though?
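To make the problem concrete, a minimal sketch of the context loss (log is a plain SLF4J logger; the second statement prints requestId=null because the MDC is backed by a ThreadLocal that does not follow the work onto the pool thread):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import java.util.concurrent.CompletableFuture;

public class MdcLossDemo {
    private static final Logger log = LoggerFactory.getLogger(MdcLossDemo.class);

    public static void main(String[] args) {
        MDC.put("requestId", "requestId425");
        log.info("request came in for url=/mypath"); // requestId is present here

        CompletableFuture
                .supplyAsync(() -> "work")
                .thenApplyAsync(s -> {
                    // Runs on a ForkJoinPool thread: requestId is gone here
                    log.info("requestId={}", MDC.get("requestId"));
                    return s;
                })
                .join();
    }
}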
If you come across this, just poke the thread here
http://mail.openjdk.java.net/pipermail/core-libs-dev/2017-May/047867.html
to implement something like Twitter Futures, which transfer Locals (much like ThreadLocal, but the state is transferred across threads).
See the def respond() method in here and how it calls Locals.save() and Locals.restore():
https://github.com/simonratner/twitter-util/blob/master/util-core/src/main/scala/com/twitter/util/Future.scala
If the Java authors fixed this, then the MDC in Logback would work across all third-party libraries. Until then, it will not work unless you can change the third-party library (and it's doubtful you can do that).
My solution theme would be to make the complete ecosystem aware of the MDC (it works with JDK 9+, as a couple of overridable methods are exposed since that version).
For that, we need to address the following scenarios:
When do we get new instances of CompletableFuture from within this class? → We need to return an MDC-aware version instead.
When do we get new instances of CompletableFuture from outside this class? → We need to return an MDC-aware version instead.
Which executor is used, and when, in the CompletableFuture class? → In all circumstances, we need to make sure that all executors are MDC-aware.
For that, let's create an MDC-aware version of CompletableFuture by extending it. My version would look like this:
import org.slf4j.MDC;

import java.util.Map;
import java.util.concurrent.*;
import java.util.function.Function;

public class MDCAwareCompletableFuture<T> extends CompletableFuture<T> {

    public static final ExecutorService MDC_AWARE_ASYNC_POOL = new MDCAwareForkJoinPool();

    @Override
    public <U> CompletableFuture<U> newIncompleteFuture() {
        return new MDCAwareCompletableFuture<>();
    }

    @Override
    public Executor defaultExecutor() {
        return MDC_AWARE_ASYNC_POOL;
    }

    public static <T> CompletionStage<T> getMDCAwareCompletionStage(CompletableFuture<T> future) {
        return new MDCAwareCompletableFuture<>()
                .completeAsync(() -> null)
                .thenCombineAsync(future, (aVoid, value) -> value);
    }

    public static <T> CompletionStage<T> getMDCHandledCompletionStage(CompletableFuture<T> future,
                                                                      Function<Throwable, T> throwableFunction) {
        // capture the caller's context so it can be restored in the handler
        Map<String, String> contextMap = MDC.getCopyOfContextMap();
        return getMDCAwareCompletionStage(future)
                .handle((value, throwable) -> {
                    setMDCContext(contextMap); // static import of the utility shown below
                    if (throwable != null) {
                        return throwableFunction.apply(throwable);
                    }
                    return value;
                });
    }
}
The MDCAwareForkJoinPool class would look like this (I have skipped the methods with ForkJoinTask parameters for simplicity):
public class MDCAwareForkJoinPool extends ForkJoinPool {
    // Override the constructors you need.
    // wrapWithMdcContext is statically imported from the utility class below.

    @Override
    public <T> ForkJoinTask<T> submit(Callable<T> task) {
        return super.submit(wrapWithMdcContext(task));
    }

    @Override
    public <T> ForkJoinTask<T> submit(Runnable task, T result) {
        return super.submit(wrapWithMdcContext(task), result);
    }

    @Override
    public ForkJoinTask<?> submit(Runnable task) {
        return super.submit(wrapWithMdcContext(task));
    }

    @Override
    public void execute(Runnable task) {
        super.execute(wrapWithMdcContext(task));
    }
}
The utility methods to wrap the tasks would be as follows:
public static <T> Callable<T> wrapWithMdcContext(Callable<T> task) {
    // save the current MDC context
    Map<String, String> contextMap = MDC.getCopyOfContextMap();
    return () -> {
        setMDCContext(contextMap);
        try {
            return task.call();
        } finally {
            // once the task is complete, clear the MDC
            MDC.clear();
        }
    };
}
public static Runnable wrapWithMdcContext(Runnable task) {
    // save the current MDC context
    Map<String, String> contextMap = MDC.getCopyOfContextMap();
    return () -> {
        setMDCContext(contextMap);
        try {
            task.run();
        } finally {
            // once the task is complete, clear the MDC
            MDC.clear();
        }
    };
}
public static void setMDCContext(Map<String, String> contextMap) {
    MDC.clear();
    if (contextMap != null) {
        MDC.setContextMap(contextMap);
    }
}
Below are some guidelines for usage:
Use the class MDCAwareCompletableFuture rather than the class CompletableFuture.
A couple of methods in the class CompletableFuture instantiate the class itself, such as new CompletableFuture.... For such methods (most of the public static methods), use an alternative way to get an instance of MDCAwareCompletableFuture; for example, rather than using CompletableFuture.supplyAsync(...), you can choose new MDCAwareCompletableFuture<>().completeAsync(...).
Convert an instance of CompletableFuture to MDCAwareCompletableFuture using the method getMDCAwareCompletionStage when you get stuck with one, say because some external library returns you a plain CompletableFuture. Obviously, you can't retain the context within that library, but this method will still retain the context once execution reaches your application code.
While supplying an executor as a parameter, make sure it is MDC-aware, such as MDCAwareForkJoinPool. You could create an MDCAwareThreadPoolExecutor by overriding its execute method as well, to serve your use case. You get the idea!
You can find a detailed explanation of all of the above here in a post about the same.
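To illustrate the guidelines, a minimal usage sketch (assuming the classes above are compiled on JDK 9+; the key and values are arbitrary):

MDC.put("requestId", "requestId425");

String result = new MDCAwareCompletableFuture<String>()
        .completeAsync(() -> "step1")                        // runs on the MDC-aware pool
        .thenApplyAsync(s -> s + "/" + MDC.get("requestId")) // context is still visible here
        .join();

System.out.println(result); // step1/requestId425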
In my current web application, I use @RestController with a CompletableFuture result for all services.
Database operations are asynchronous (CompletableFuture methods), but I would like the commit to happen only before sending the result.
I would like to commit database modifications after the asynchronous --save-- has ended (--save-- is a list of future business operations).
@RestController
public class MyController {
    ...
    @RequestMapping(...)
    public CompletableFuture<ResponseEntity<AnyResource>> service(...) {
        CompletableFuture ...
            .thenCompose(--check--)
            .thenAsync(--save--)
            ... etc.
            .thenApply(
                return ResponseEntity.ok().body(theResource);
            );
    }
}
-> I've tried with @Transactional, but it doesn't work (it commits at the method's end, while the async methods are only partially executed or not executed at all).
-> The other way is programmatic:
@RequestMapping(...)
public CompletableFuture<ResponseEntity<AnyResource>> service(...) {
    DefaultTransactionDefinition def = new DefaultTransactionDefinition();
    // explicitly setting the transaction name is something that can only be done programmatically
    def.setName("SomeTxName");
    def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);
    TransactionStatus status = this.platformTransactionManager.getTransaction(def);

    CompletableFuture ...
        .thenCompose(--check--)
        .thenAsync(--save--)
        ... etc.
        .thenApply(
            this.platformTransactionManager.commit(status)
            return ResponseEntity.ok().body(theResource);
        );
}
An error occurred, "Cannot deactivate transaction synchronization - not active", presumably because the commit does not run on the same thread.
Is there a proper way to use transactions with CompletableFuture?
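For what it's worth, one pattern that avoids this error: Spring transactions are bound to a thread, so begin, the database work, and the commit must all run inside the same async stage. A sketch using TransactionTemplate (transactionTemplate is an injected org.springframework.transaction.support.TransactionTemplate; checkAndSave() is a placeholder for the --check--/--save-- chain executed synchronously inside the transaction):

@RequestMapping(...)
public CompletableFuture<ResponseEntity<AnyResource>> service(...) {
    return CompletableFuture
            .supplyAsync(() ->
                    // begin, checks, saves and commit all happen on this one
                    // worker thread, so the thread-bound transaction is valid
                    transactionTemplate.execute(status -> checkAndSave()))
            .thenApply(resource -> ResponseEntity.ok().body(resource));
}

The trade-off is that the transactional work is serialized into a single stage; splitting one transaction across several CompletableFuture stages is not supported by Spring's thread-bound transaction model.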