I want to run a periodic task. In Spring MVC it works flawlessly.
Now I want to integrate Spring WebFlux + Kotlin coroutines.
How can I call suspend functions in the @Scheduled method? I want it to wait until the suspend function has finished.
// This function runs every day at 00:10 UTC
@Scheduled(cron = "0 10 0 * * *", zone = "UTC")
fun myScheduler() {
// ???
}
suspend fun mySuspendedFunction() {
// business logic
}
fun myScheduler() {
runBlocking {
mySuspendedFunction()
}
}
This way the coroutine runs in the thread that was blocked. If you need to run the code on a different thread, or execute several coroutines in parallel, you can pass a dispatcher (e.g. Dispatchers.Default, Dispatchers.IO) to runBlocking() or use withContext(), as in the sketch below.
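For illustration, a minimal sketch of the dispatcher variants, reusing the scheduler and function names from the question (the choice of Dispatchers.Default/IO here is an assumption):

@Scheduled(cron = "0 10 0 * * *", zone = "UTC")
fun myScheduler() {
    // Still blocks the scheduling thread until everything is done,
    // but the coroutine body runs on the Default thread pool.
    runBlocking(Dispatchers.Default) {
        mySuspendedFunction()
    }
}

// Or switch context only for part of the work:
fun mySchedulerWithIo() = runBlocking {
    withContext(Dispatchers.IO) {
        mySuspendedFunction()
    }
}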
I have a requirement where we want to asynchronously handle some upstream request/payload via a coroutine. I see that there are several ways to do this, but I'm wondering which is the right approach:
1) Provide an explicit Spring service class that implements CoroutineScope
2) Autowire a singleton scope context backed by a defined thread-pool dispatcher
3) Define a method-local CoroutineScope object
Following on from this, I'm wondering what the trade-off is if we define method-local scopes like the one below:
fun testSuspensions(count: Int) {
val launchTime = measureTimeMillis {
val parentJob = CoroutineScope(Dispatchers.IO).launch {
repeat(count) {
this.launch {
process() // Some long-running process
}
}
}
}
}
An alternative approach is to autowire an explicit scope object backed by a custom dispatcher:
@KafkaListener(
topics = ["test_topic"],
concurrency = "1",
containerFactory = "someListenerContainerConfig"
)
private fun testKafkaListener(consumerRecord: ConsumerRecord<String, ByteArray>, ack: Acknowledgment) {
try {
this.coroutineScope.launch {
consumeRecordAsync(consumerRecord)
}
} finally {
ack.acknowledge()
}
}
suspend fun consumeRecordAsync(record: ConsumerRecord<String, ByteArray>) {
println("[${Thread.currentThread().name}] Starting to consume record - ${record.key()}")
val statusCode = initiateIO(record) // Add error-handling depending on kafka topic commit semantics.
// Chain any-other business logic (depending on status-code) as suspending functions.
consumeStatusCode(record.key(), statusCode)
}
suspend fun initiateIO(record: ConsumerRecord<String, ByteArray>): Int {
return withContext(Dispatchers.IO) { // Switch context to IO thread for http.
println("[${Thread.currentThread().name}] Executing network call - ${record.key()}")
delay(1000 * 2) // Simulate IO call
200 // Return status-code
}
}
suspend fun consumeStatusCode(recordKey: String, statusCode: Int) {
delay(1000 * 1) // Simulate work.
println("[${Thread.currentThread().name}] consumed record - $recordKey, status-code - $statusCode")
}
The bean is autowired as follows in some upstream config class:
@Bean(name = ["testScope"])
fun defineExtensionScope(): CoroutineScope {
val threadCount: Int = 4
return CoroutineScope(Executors.newFixedThreadPool(threadCount).asCoroutineDispatcher())
}
It depends on what your goal is. If you just want to avoid the thread-per-request model, you can use Spring's support for suspend functions in controllers instead (via WebFlux), and that removes the need for an external scope at all:
suspend fun testSuspensions(count: Int) {
val execTime = measureTimeMillis {
coroutineScope {
repeat(count) {
launch {
process() // some long running process
}
}
}
}
// all child coroutines are done at this point
}
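Such a suspend function can then be called directly from a WebFlux controller endpoint, with no extra scope involved; a hypothetical sketch (controller and service names are made up):

@RestController
class TestController(private val testService: TestService) {

    @GetMapping("/test")
    suspend fun test(@RequestParam count: Int): String {
        testService.testSuspensions(count) // suspends until all child coroutines complete
        return "done"
    }
}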
If you really want your method to return immediately and schedule coroutines that outlive it, you indeed need that extra scope.
Regarding option 1), making custom classes implement CoroutineScope is not encouraged anymore (as far as I understand). It's usually suggested to use composition instead: declare a scope as a property rather than having your own classes implement the interface. So I would suggest your option 2).
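A minimal sketch of the composition approach, with a made-up service name (the scope is held as a property and cancelled when the bean is destroyed):

@Service
class PayloadHandler {
    // Composition: the scope is a property instead of an implemented interface
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    fun handle(payload: String) {
        scope.launch {
            // suspending business logic goes here
        }
    }

    @PreDestroy
    fun shutdown() {
        scope.cancel() // stops any still-running coroutines when the bean is destroyed
    }
}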
I would say option 3) is out of the question, because there is no point in using CoroutineScope(Dispatchers.IO).launch { ... }. It's no better than using GlobalScope.launch(Dispatchers.IO) { ... } (it has the same pitfalls) - you can read about the pitfalls of GlobalScope in its documentation.
The main problem is that you run your coroutines outside structured concurrency: they are not children of a parent job, and they may accumulate and hold resources if they are not well behaved and you forget about them. In general it's better to define a scope that is cancelled when you no longer need any of the coroutines run by it, so you can clean up rogue coroutines.
That said, in some circumstances you do need to run coroutines "forever" (for the whole life of your application). In that case it's ok to use GlobalScope, or a custom application-wide scope if you need to customize things like the thread pool or exception handler. But in any case don't create a scope on the spot just to launch a coroutine without keeping a handle to it.
In your case, it seems there is no clear moment at which you would stop caring about the long-running coroutines, so you may be OK with the fact that they can live forever and are never cancelled. In that case, I would suggest a custom application-wide scope that you wire into your components.
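As a sketch of that wiring, building on the @Bean from the question but with a SupervisorJob and an exception handler added (the bean name, the SLF4J logger and the choice of Dispatchers.IO are assumptions):

@Configuration
class CoroutineScopeConfig {

    @Bean(name = ["applicationScope"])
    fun applicationCoroutineScope(): CoroutineScope {
        val logger = LoggerFactory.getLogger(CoroutineScopeConfig::class.java)
        // SupervisorJob: one failing coroutine does not cancel its siblings
        val handler = CoroutineExceptionHandler { _, e ->
            logger.error("Unhandled coroutine failure", e)
        }
        return CoroutineScope(SupervisorJob() + Dispatchers.IO + handler)
    }
}

The Kafka listener above could then launch into this injected scope exactly as it does with testScope.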
Hi, I'm trying to improve the performance of some computation in my system. Basically I want to generate a series of actions based on some data. This doesn't scale well, so I want to try doing it in parallel and collecting the results afterwards (a bit like how futures work).
I have an interface with a series of implementations that each return a collection of actions, and I want to call all of these in parallel and await the results at the end.
The issue is that, when I look at the logs, it's clearly doing this sequentially, waiting on each action getter before moving to the next one. I thought async would do this asynchronously, but it's not.
The method the runBlocking is in is within a Spring transaction; maybe that has something to do with it.
runBlocking {
val actions = actionsReportGetters.map { actionReportGetter ->
async {
getActions(actionReportGetter, abstractUser)
}
}.awaitAll().flatten()
allActions.addAll(actions)
}
private suspend fun getActions(actionReportGetter: ActionReportGetter, traderUser: TraderUser): List<Action> {
return actionReportGetter.getActions(traderUser)
}
interface ActionReportGetter {
fun getActions(traderUser: TraderUser): List<Action>
}
Looks like you are doing some blocking operation in ActionReportGetter.getActions in a single threaded environment (probably in the main thread).
For such IO operations you should launch your coroutines in Dispatchers.IO which provides a thread pool with multiple threads.
Update your code to this:
async(Dispatchers.IO) { // Switch to the IO dispatcher
    getActions(actionReportGetter, abstractUser)
}
Also getActions need not be a suspending function here. You can remove the suspend modifier from it.
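Putting both suggestions together, a sketch of the adjusted block from the question (same variable names as above):

val actions = runBlocking {
    actionsReportGetters.map { actionReportGetter ->
        async(Dispatchers.IO) { // each blocking getActions call runs on the IO thread pool
            actionReportGetter.getActions(abstractUser)
        }
    }.awaitAll().flatten() // wait for all getters, then merge the lists
}
allActions.addAll(actions)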
I need to call a REST API (50 calls at a time) using async. Each call takes some time (20 seconds) to respond, after which some processing happens. Then I need to know when all threads are completed; once all of them are done, I need to call that API again with 50 threads. How can this be achieved in Spring?
Personally, I like Project Reactor. It gives you a flexible way to construct the things you're asking about.
Here is an example of a possible implementation:
@Async
@Scheduled(cron = "*/10 * * * * *")
public void job() {
log.info("Cron job has been triggered");
Flux<ResponseEntity<Void>> externalCall = Flux
.range(1, 5)
.parallel()
.runOn(Schedulers.elastic())
.flatMap(i -> {
log.info("Call external service {}", i);
return webClient.get().retrieve().toBodilessEntity();
})
.sequential();
Flux
.from(externalCall)
.thenMany(Flux.defer(
() -> {
log.info("First execution completed. Call external service one more time");
return externalCall;
})
)
.last()
.subscribe(i -> log.info("Execution completed"));
}
I also prepared a minimal project that you can find in my repository.
You can run the project using the following command: ./mvnw -pl async-scheduled-call-external-service clean spring-boot:run
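If you would rather stay with Kotlin coroutines than Reactor, a rough sketch of the same pattern (the webClient field, the batch size of 50 and the two rounds are assumptions mirroring the example above):

@Scheduled(cron = "*/10 * * * * *")
fun coroutineJob() = runBlocking {
    repeat(2) { round -> // call the API, wait for all 50 responses, then call it again
        val results = (1..50).map {
            async {
                // awaitSingle() comes from kotlinx-coroutines-reactor
                webClient.get().retrieve().toBodilessEntity().awaitSingle()
            }
        }.awaitAll() // suspends until every call in this round has answered
        println("Round $round finished with ${results.size} responses")
    }
}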
The app does not use WebFlux or reactive programming; it uses a normal CrudRepository to connect to a database that sometimes takes a long time to respond, and it uses WebClient to perform requests to other services, but with the block() function to get the result synchronously. I want to change the following code so that both calls happen concurrently:
@Service
class CustomerService(
val profileClient: WebClient,
val customerRepository: CustomerRepository
) {
fun getCustomer(id: String) : CustomerData {
val customer = customerRepository.findById(id)
val profile = profileClient.get().uri("/v1/profile/{id}", id)
.retrieve().bodyToMono<Profile>()
.block()
return CustomerData(customer, profile)
}
}
If the call to customerRepository.findById(id) takes, let's say, 20 ms and the profileClient call takes 50 ms, the whole method takes 70 ms, whereas if I made both calls concurrently it should take around 50 ms.
I cannot migrate the app to a fully reactive version with WebFlux because there is too much code to migrate.
If you need concurrency, you can use Kotlin coroutines.
Your code would look like this:
fun getCustomer(id: String): CustomerData = runBlocking {
    // Both calls are blocking, so give each async its own thread via Dispatchers.IO;
    // otherwise they would run one after the other on the single runBlocking thread.
    val customer = async(Dispatchers.IO) { customerRepository.findById(id) }
    val profile = async(Dispatchers.IO) {
        profileClient.get().uri("/v1/profile/{id}", id)
            .retrieve().bodyToMono<Profile>()
            .block()
    }
    CustomerData(customer.await(), profile.await())
}
I have a Spring Boot controller which makes two service calls. The second call should occur only 10 seconds after getting the response from the first call.
public SomeResponse myAction() {
res = serviceCallA();
waitFor(10) {
serviceCallB();
}
return res;
}
The action doesn't have to wait for the response from serviceCallB() before returning its own response; the call to serviceCallB() just has to be triggered on a separate thread.
What's the best way to implement this? I need something like a ThreadPoolTaskExecutor, but with a delay.
Sample code would be awesome.
Use a promise, not the horrible Thread.sleep from 1999 that wastes precious system resources. Your options are CompletableFuture, RxJava Publisher constructs, or Spring's own Project Reactor.
Let serviceCallA return Mono<Something> (Project Reactor). Then:
res.delayElement(Duration.ofSeconds(10))
.doOnEach(unused -> serviceCallB())
.block();
There's probably 6 ways to do this in each library, the above being one.
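For comparison, a Kotlin sketch of the CompletableFuture option mentioned above (delayedExecutor requires Java 9+; the method names come from the question):

fun myAction(): SomeResponse {
    val res = serviceCallA()
    // Schedule serviceCallB ten seconds from now without blocking the request thread
    CompletableFuture.runAsync(
        { serviceCallB() },
        CompletableFuture.delayedExecutor(10, TimeUnit.SECONDS)
    )
    return res
}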
A very straightforward answer:
SomeResponse myAction() {
res = serviceCallA();
serviceCallB();
return res;
}
@Async
void serviceCallB() throws InterruptedException {
    Thread.sleep(10000); // 10 secs
    // do service B call stuff
}
More on @Async with Spring, also this.
Beware though: since these calls run the serviceCallB() logic on new threads, using them without proper control might cause memory issues and kill your server.
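One way to keep that under control is to back @Async with a bounded, named executor instead of the default one; a sketch in Kotlin (the pool sizes and bean name are arbitrary):

@Configuration
@EnableAsync
class AsyncConfig {

    @Bean(name = ["serviceBExecutor"])
    fun serviceBExecutor(): ThreadPoolTaskExecutor {
        val executor = ThreadPoolTaskExecutor()
        executor.setCorePoolSize(4)
        executor.setMaxPoolSize(8)
        executor.setQueueCapacity(100) // bound the queue so tasks cannot pile up indefinitely
        executor.setThreadNamePrefix("service-b-")
        executor.initialize()
        return executor
    }
}

// then annotate the method with @Async("serviceBExecutor")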
With the java.util.concurrent package you have the Executors:
ScheduledExecutorService ex = Executors.newSingleThreadScheduledExecutor();
ex.schedule(() -> serviceCallB(), 10, TimeUnit.SECONDS);