How to launch coroutines in Spring applications

I have an app that consumes messages from a message queue and processes them. The processing is implemented as suspending functions: one service publishes the events to a Channel<Event>, and another service basically does:
for (event in channel) {
    eventProcessor.process(event)
}
The problem is that this is also a suspending function, and I am really not sure what's the proper way to launch it within the context of Spring.
My initial solution was to do the following:
@Bean
fun myProcessor(eventProcessor: EventProcessor, channel: Channel<Event>): Job {
    return GlobalScope.launch {
        eventProcessor.startProcessing(channel)
    }
}
But it seems somewhat hacky, and I am not sure what's the proper way to do it.

Launching anything on GlobalScope is a really bad idea. You lose all the advantages of structured concurrency that way.
Instead, make your EventProcessor implement CoroutineScope.
This will force you to specify coroutineContext, so you can use Dispatchers.Default:
override val coroutineContext = Dispatchers.Default
So, the full example will look something like this:
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.Job
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.context.annotation.Bean

@SpringBootApplication
class SpringKotlinCoroutinesApplication {

    @Bean
    fun myProcessor(eventProcessor: EventProcessor, channel: Channel<Event>): Job {
        return eventProcessor.startProcessing(channel)
    }

    @Bean
    fun p() = EventProcessor()

    @Bean
    fun c() = Channel<Event>()
}

fun main(args: Array<String>) {
    runApplication<SpringKotlinCoroutinesApplication>(*args)
}

class Event

class EventProcessor : CoroutineScope {
    // Dispatchers.Default satisfies the coroutineContext required by CoroutineScope
    override val coroutineContext = Dispatchers.Default

    // launch now resolves against this scope rather than GlobalScope
    fun startProcessing(channel: Channel<Event>) = launch {
        for (e in channel) {
            println(e)
        }
    }
}

Related

Aggregator Spring Cloud Stream with timeout

I want to make an application that receives messages, stores them in a list, and later, on a schedule, releases those messages every X amount of time.
I know Spring Cloud Stream has an aggregator that already does this, but I think I need to do it manually, because I need to keep a unique message per key and only replace the old message if it matches a specific condition (I think of it as a Set aggregator with conditions).
What I have tried so far (also in this repo: https://github.com/chalimbu/AggregatorQuestionStack):
Processor.
import org.springframework.cloud.stream.annotation.EnableBinding
import org.springframework.cloud.stream.annotation.Input
import org.springframework.cloud.stream.annotation.Output
import org.springframework.cloud.stream.messaging.Processor
import org.springframework.scheduling.annotation.Scheduled

@EnableBinding(Processor::class)
class SetAggregatorProcessor(val storageService: StorageService) {

    @Input
    fun inputMessage(input: Map<String, Any>) {
        storageService.messages.add(input)
    }

    @Output
    @Scheduled(fixedDelay = 20000)
    fun produceOutput(): List<Map<String, Any>> {
        val message = storageService.messages
        storageService.messages.clear()
        return message
    }
}
Memory storage.
import org.springframework.stereotype.Service

@Service
class StorageService {
    var messages: MutableList<Map<String, Any>> = mutableListOf()
}
This code generates the following error when I start pushing messages.
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:139) ~[spring-integration-core-5.5.8.jar:5.5.8]
The idea is to deploy this app as part of the spring cloud stream (dataflow) platform.
I prefer the declarative approach (over the functional approach), but if somebody knows how to do it the Reactor way, I could settle for that.
Thanks for any help or advice.
Thanks to this example (https://github.com/spring-cloud/spring-cloud-stream-samples/blob/main/processor-samples/sensor-average-reactive-kafka/src/main/java/sample/sensor/average/SensorAverageProcessorApplication.java) I was able to figure something out using Flux, in case someone else needs it:
import java.time.Duration
import java.util.function.Function
import org.springframework.context.annotation.Configuration
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

@Configuration
class SetAggregatorProcessor : Function<Flux<Map<String, Any>>, Flux<MutableList<Map<String, Any>>>> {

    override fun apply(data: Flux<Map<String, Any>>): Flux<MutableList<Map<String, Any>>> {
        // collect everything arriving within a 20-second window into one list
        return data.window(Duration.ofSeconds(20)).flatMap { window: Flux<Map<String, Any>> ->
            aggregateList(window)
        }
    }

    private fun aggregateList(group: Flux<Map<String, Any>>): Mono<MutableList<Map<String, Any>>> {
        return group.reduce(mutableListOf<Map<String, Any>>()) { accumulator, element ->
            accumulator.add(element)
            accumulator
        }
    }
}
Update: https://github.com/chalimbu/AggregatorQuestionStack/tree/main/src/main/kotlin/com/project/co/SetAggregator
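The reduce above only collects each window into a list. The key-based replacement described in the question could be sketched on top of it roughly like this (a sketch only: the "id" key and the "version" comparison are hypothetical stand-ins for the real condition):
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

// Collapse a window to one message per key, replacing an existing entry
// only when the new message satisfies the condition.
private fun aggregateByKey(window: Flux<Map<String, Any>>): Mono<List<Map<String, Any>>> =
    window.reduce(mutableMapOf<Any, Map<String, Any>>()) { accumulator, element ->
        val key = element["id"] ?: return@reduce accumulator // skip messages without a key
        val previous = accumulator[key]
        if (previous == null || shouldReplace(previous, element)) {
            accumulator[key] = element
        }
        accumulator
    }.map { it.values.toList() }

// Hypothetical condition: keep the message with the higher "version".
private fun shouldReplace(old: Map<String, Any>, candidate: Map<String, Any>): Boolean =
    (candidate["version"] as? Int ?: 0) > (old["version"] as? Int ?: 0)
Swapping this in for aggregateList (and adjusting the Function's output type accordingly) keeps the windowing behavior while deduplicating by key.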

Getting started with Kotlin and SpringBootApplication to run some suspend fun

Trying to run this repo with some suspend functions. Can someone please give some hints?
Let's say we have one:
suspend fun log() {
    mLog.subscribeAlways<GroupMessageEvent> { event ->
        if (event.message.content.contains("Error")) {
            print("****")
        } else if (event.message.content.contains("Warning")) {
            print("Warning")
        }
    }
    mLog.Listen()
}
How can we trigger this log from main?
open class Application {
    companion object {
        @JvmStatic
        fun main(args: Array<String>) {
            SpringApplication.run(Application::class.java, *args)
        }
    }
}
What I have tried: calling the log function from the Controller class. It runs without errors, but it didn't work as expected.
class Controller {
    @Value("\${spring.datasource.url}")
    private var dbUrl: String? = null

    @Autowired
    private lateinit var dataSource: DataSource

    @RequestMapping("/")
    internal suspend fun index(): String {
        mLog()
        return "index"
    }
}
Suspend functions should be called from a coroutine. There are several coroutine builder functions: launch, async, runBlocking.
Using these functions you can start a coroutine, for example:
runBlocking {
    // call suspend methods
}
Coroutines launched with the launch and async coroutine builders run in the context of some CoroutineScope. They don't block the current thread. There is more info in the docs.
runBlocking, by contrast, blocks the current thread interruptibly until its completion.
Using the launch coroutine builder, you will not block the current thread when calling a suspend function in it:
fun index(): String {
    GlobalScope.launch {
        log()
    }
    return "index"
}
Note: in this case the function index returns before log is executed. Using GlobalScope is discouraged; application code usually should use an application-defined CoroutineScope. How to define a CoroutineScope is described in the docs and here and here.
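For illustration, a minimal sketch of an application-defined scope (the LogService name and the shutdown hook are assumptions, not part of the original question):
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.cancel
import kotlinx.coroutines.launch

// A bean-owned scope: coroutines launched here can be cancelled together
// when the bean shuts down, instead of outliving it as GlobalScope ones do.
class LogService {
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    fun index(): String {
        scope.launch { log() } // fire-and-forget inside the bean's own scope
        return "index"
    }

    // e.g. wire this to a @PreDestroy callback
    fun shutdown() = scope.cancel()

    private suspend fun log() { /* ... */ }
}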
If your intention is to block the current thread until the suspend function completes, use the runBlocking coroutine builder:
fun index(): String = runBlocking {
    log()
    "index"
}
Some useful links: GlobalScope.launch vs runBlocking, Update UI async call with coroutines, Suspend function 'callGetApi' should be called only from a coroutine or another suspend function.

Spring STOMP over WebSocket SubscribeMapping not working

I'm trying to configure subscription mapping for STOMP over WebSockets in a Spring Boot application, without any luck. I'm fairly certain I have the STOMP/WebSocket stuff configured correctly, as I am able to subscribe to topics that are being published to by a Kafka consumer, but using @SubscribeMapping is not working at all.
Here is my controller
@Controller
class TestController {

    @SubscribeMapping("/topic/test")
    fun testMapping(): String {
        return "THIS IS A TEST"
    }
}
And here is my configuration
@Configuration
@EnableWebSocketMessageBroker
@Order(Ordered.HIGHEST_PRECEDENCE + 99)
class WebSocketConfig : AbstractWebSocketMessageBrokerConfigurer() {

    override fun configureMessageBroker(config: MessageBrokerRegistry) {
        config.setApplicationDestinationPrefixes("/app", "/topic")
        config.enableSimpleBroker("/queue", "/topic")
        config.setUserDestinationPrefix("/user")
    }

    override fun registerStompEndpoints(registry: StompEndpointRegistry) {
        registry.addEndpoint("/ws").setAllowedOrigins("*")
    }

    override fun configureClientInboundChannel(registration: ChannelRegistration?) {
        registration?.setInterceptors(object : ChannelInterceptorAdapter() {
            override fun preSend(message: Message<*>, channel: MessageChannel): Message<*> {
                val accessor: StompHeaderAccessor =
                    MessageHeaderAccessor.getAccessor(message, StompHeaderAccessor::class.java)
                if (StompCommand.CONNECT == accessor.command) {
                    Optional.ofNullable(accessor.getNativeHeader("authorization")).ifPresent {
                        val token = it[0]
                        val keyReader = KeyReader()
                        val creds = Jwts.parser().setSigningKey(keyReader.key).parseClaimsJws(token).body
                        val groups = creds.get("groups", List::class.java)
                        val authorities = groups.map { SimpleGrantedAuthority(it as String) }
                        val authResult = UsernamePasswordAuthenticationToken(creds.subject, token, authorities)
                        SecurityContextHolder.getContext().authentication = authResult
                        accessor.user = authResult
                    }
                }
                return message
            }
        })
    }
}
And then in the UI code, I'm using Angular with a stompjs wrapper to subscribe to it like this:
this.stompService.subscribe('/topic/test')
    .map(data => data.body)
    .subscribe(data => console.log(data));
Subscribing like this to topics that I know are emitting data works perfectly, but the @SubscribeMapping does nothing. I've also tried adding event listeners to my WebSocket config to test that the UI is actually sending a subscription event to the back end, like this:
@EventListener
fun handleSubscribeEvent(event: SessionSubscribeEvent) {
    println("Subscription event: $event")
}

@EventListener
fun handleConnectEvent(event: SessionConnectEvent) {
    println("Connection event: $event")
}

@EventListener
fun handleDisconnectEvent(event: SessionDisconnectEvent) {
    println("Disconnection event: $event")
}
Adding these event listeners, I can see that all the events I'm expecting from the UI are coming through in the Kotlin layer, but my controller method never gets called. Is there anything obvious that I'm missing?
Try the following. Because "/topic" is registered as an application destination prefix, it is stripped from the destination before @SubscribeMapping values are matched, so the mapping should be "/test" rather than "/topic/test":
@Controller
class TestController {

    @SubscribeMapping("/test")
    fun testMapping(): String {
        return "THIS IS A TEST"
    }
}

Run PublishSubject on a different thread (RxJava)

I am running RxJava and creating a subject so I can use the onNext() method to produce data. I am using Spring.
This is my setup:
@Component
public class SubjectObserver {

    private SerializedSubject<SomeObj, SomeObj> safeSource;

    public SubjectObserver() {
        safeSource = PublishSubject.<SomeObj>create().toSerialized();
        safeSource.subscribeOn(<my taskthreadExecutor>);
        safeSource.observeOn(<my taskthreadExecutor>);
        safeSource.subscribe(new Subscriber<SomeObj>() {
            @Override
            public void onNext(SomeObj someObj) {
                LOGGER.debug("{} invoked.", Thread.currentThread().getName());
                doSomething();
            }
        });
    }

    public void publish(SomeObj myObj) {
        safeSource.onNext(myObj);
    }
}
The way new data is generated on the RxJava stream is by @Autowired private SubjectObserver subjectObserver and then calling subjectObserver.publish(newDataObjGenerated).
No matter what I specify for subscribeOn() & observeOn():
Schedulers.io()
Schedulers.computation()
my threads
Schedulers.newThread
The onNext() callback and the actual work inside it are done on the same thread that calls onNext() on the subject to produce data.
Is this correct? If so, what am I missing? I was expecting the doSomething() to be done on a different thread.
Update
In my calling class, if I change the way I am invoking the publish method, then of course a new thread is allocated for the subscriber to run on.
taskExecutor.execute(() -> subjectObserver.publish(newlyGeneratedObj));
Thanks,
Each operator on an Observable/Subject returns a new instance with the extra behavior; however, your code applies subscribeOn and observeOn, then throws away whatever they produce and subscribes to the raw Subject. You should chain the method calls and then subscribe:
safeSource = PublishSubject.<AsyncRemoteRequest>create().toSerialized();
safeSource
    .subscribeOn(<my taskthreadExecutor>)
    .observeOn(<my taskthreadExecutor>)
    .subscribe(new Subscriber<AsyncRemoteRequest>() {
        @Override
        public void onNext(AsyncRemoteRequest asyncRemoteRequest) {
            LOGGER.debug("{} invoked.", Thread.currentThread().getName());
            doSomething();
        }
    });
Note that subscribeOn has no practical effect on a PublishSubject because there is no subscription side-effect happening in its subscribe() method.

Does CompletableFuture have a corresponding Local context?

In the olden days, we had ThreadLocal for programs to carry data along the request path, since all request processing was done on that thread, and things like Logback used this with MDC.put("requestId", getNewRequestId());
Then Scala and functional programming came along, and with Futures came Local.scala (at least I know the Twitter Futures have this class). Future.scala knows about Local.scala and transfers the context through all the map/flatMap etc. functionality, such that I can still do Local.set("requestId", getNewRequestId()); and then downstream, after it has travelled over many threads, I can still access it with Local.get(...).
So, my question is: in Java, can I do the same thing with the new CompletableFuture, via some LocalContext or similar object (not sure of the name)? That way I could have the Logback MDC store the request id in that context instead of a ThreadLocal, so I don't lose the request id, and all my logs across thenApply, thenAccept, etc. would still work fine with the %X{requestId} directive in the Logback configuration.
EDIT:
As an example: if you have a request come in and you are using Log4j or Logback, in a filter you will set MDC.put("requestId", requestId), and then in your app you will log many log statements like this:
log.info("request came in for url="+url);
log.info("request is complete");
Now, in the log output it will show this:
INFO {time}: requestId425 request came in for url=/mypath
INFO {time}: requestId425 request is complete
This uses a ThreadLocal trick to achieve that. At Twitter, we use Scala and Twitter Futures along with the Local.scala class. Local.scala and Future.scala are tied together such that we can still achieve the above scenario, which is very nice: all our log statements can log the request id, the developer never has to remember to log it, and you can trace through a single customer's request/response cycle with that id.
I don't see this in Java :( which is very unfortunate as there are many use cases for that. Perhaps there is something I am not seeing though?
If you come across this, just poke the thread here
http://mail.openjdk.java.net/pipermail/core-libs-dev/2017-May/047867.html
to implement something like Twitter Futures, which transfer Locals (much like ThreadLocal, but the state transfers across threads).
See the def respond() method in here and how it calls Locals.save() and Locals.restore():
https://github.com/simonratner/twitter-util/blob/master/util-core/src/main/scala/com/twitter/util/Future.scala
If the Java authors fixed this, then the MDC in Logback would work across all third-party libraries. Until then, IT WILL NOT WORK unless you can change the third-party library (and it's doubtful you can do that).
My solution theme (it works with JDK 9+, as a couple of overridable methods are exposed since that version) would be to:
Make the complete ecosystem aware of MDC
And for that, we need to address the following scenarios:
Wherever we get new instances of CompletableFuture from within this class → we need to return an MDC-aware version instead.
Wherever we get new instances of CompletableFuture from outside this class → we need to return an MDC-aware version instead.
Whichever executor is used within the CompletableFuture class → we need to make sure it is MDC-aware.
For that, let's create an MDC-aware subclass of CompletableFuture by extending it. My version would look like below:
import org.slf4j.MDC;

import java.util.Map;
import java.util.concurrent.*;
import java.util.function.Function;

public class MDCAwareCompletableFuture<T> extends CompletableFuture<T> {

    public static final ExecutorService MDC_AWARE_ASYNC_POOL = new MDCAwareForkJoinPool();

    @Override
    public <U> CompletableFuture<U> newIncompleteFuture() {
        return new MDCAwareCompletableFuture<>();
    }

    @Override
    public Executor defaultExecutor() {
        return MDC_AWARE_ASYNC_POOL;
    }

    public static <T> CompletionStage<T> getMDCAwareCompletionStage(CompletableFuture<T> future) {
        return new MDCAwareCompletableFuture<Void>()
                .completeAsync(() -> null)
                .thenCombineAsync(future, (aVoid, value) -> value);
    }

    public static <T> CompletionStage<T> getMDCHandledCompletionStage(CompletableFuture<T> future,
                                                                      Function<Throwable, T> throwableFunction) {
        Map<String, String> contextMap = MDC.getCopyOfContextMap();
        return getMDCAwareCompletionStage(future)
                .handle((value, throwable) -> {
                    // setMDCContext is the utility method shown further below
                    setMDCContext(contextMap);
                    if (throwable != null) {
                        return throwableFunction.apply(throwable);
                    }
                    return value;
                });
    }
}
The MDCAwareForkJoinPool class would look like this (I have skipped the methods with ForkJoinTask parameters for simplicity):
public class MDCAwareForkJoinPool extends ForkJoinPool {

    // override whichever constructors you need;
    // assumes a static import of the wrapWithMdcContext utilities below

    @Override
    public <T> ForkJoinTask<T> submit(Callable<T> task) {
        return super.submit(wrapWithMdcContext(task));
    }

    @Override
    public <T> ForkJoinTask<T> submit(Runnable task, T result) {
        return super.submit(wrapWithMdcContext(task), result);
    }

    @Override
    public ForkJoinTask<?> submit(Runnable task) {
        return super.submit(wrapWithMdcContext(task));
    }

    @Override
    public void execute(Runnable task) {
        super.execute(wrapWithMdcContext(task));
    }
}
The utility methods to wrap tasks would be as follows:
public static <T> Callable<T> wrapWithMdcContext(Callable<T> task) {
    // save the current MDC context
    Map<String, String> contextMap = MDC.getCopyOfContextMap();
    return () -> {
        setMDCContext(contextMap);
        try {
            return task.call();
        } finally {
            // once the task is complete, clear the MDC
            MDC.clear();
        }
    };
}

public static Runnable wrapWithMdcContext(Runnable task) {
    // save the current MDC context
    Map<String, String> contextMap = MDC.getCopyOfContextMap();
    return () -> {
        setMDCContext(contextMap);
        try {
            task.run();
        } finally {
            // once the task is complete, clear the MDC
            MDC.clear();
        }
    };
}

public static void setMDCContext(Map<String, String> contextMap) {
    MDC.clear();
    if (contextMap != null) {
        MDC.setContextMap(contextMap);
    }
}
Below are some guidelines for usage (a short usage sketch follows the list):
Use the class MDCAwareCompletableFuture rather than the class CompletableFuture.
A couple of methods in CompletableFuture instantiate their own version, such as new CompletableFuture.... For such methods (most of the public static methods), use an alternative way to get an instance of MDCAwareCompletableFuture. For example, rather than using CompletableFuture.supplyAsync(...), you can choose new MDCAwareCompletableFuture<>().completeAsync(...).
Convert an instance of CompletableFuture to MDCAwareCompletableFuture using the method getMDCAwareCompletionStage when you get stuck with one, say because some external library returns you an instance of CompletableFuture. Obviously you can't retain the context within that library, but this method will still retain the context once control returns to your application code.
When supplying an executor as a parameter, make sure it is MDC-aware, such as MDCAwareForkJoinPool. You could create an MDCAwareThreadPoolExecutor by overriding its execute method as well, to serve your use case. You get the idea!
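To make that concrete, a minimal usage sketch in Kotlin (calling the classes above; the "requestId" value and doWork() are made-up illustrations, not part of the original answer):
import org.slf4j.MDC

fun doWork(): String = "done" // hypothetical workload

fun handleRequest() {
    MDC.put("requestId", "425")
    MDCAwareCompletableFuture<String>()
        .completeAsync { doWork() }          // runs on the MDC-aware pool
        .thenApplyAsync { result ->
            // the wrapping executor restored the caller's MDC before this ran
            "requestId=${MDC.get("requestId")}: $result"
        }
        .thenAccept { println(it) }
        .join() // block here only so the demo prints before returning
}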
You can find a detailed explanation of all of the above here in a post about the same.
