Kotlin coroutines and 'suspending functions' make it easy for the programmer to await the result of I/O without stopping the thread (the thread is given other work to do until the I/O is complete).
jOOQ is a Java-first product for writing and executing SQL in a typesafe way, but does not itself make explicit use of Kotlin coroutines.
Can jOOQ be called from a Kotlin coroutine scope to get the nice-to-write and thread-is-productive-even-during-IO benefits?
suspend fun myQuery() {
    return dsl.select()
        // .etc()
        .fetch() // <- could this be a 'suspend' call?
}
A: Yes.
BECAUSE
org.jooq.ResultQuery<R extends Record> has a @NotNull CompletionStage<Result<R>> fetchAsync() method which hooks into Java's (JDK 8+) machinery for 'futures'.
AND
kotlinx-coroutines-jdk8 provides a set of extension functions for adapting between Kotlin suspending functions and JDK 8+ futures (CompletionStage/CompletableFuture).
THEREFORE we can do:
import kotlinx.coroutines.future.await
...
suspend fun myQuery() {
    return dsl.select()
        // .etc()
        .fetchAsync()
        .await() // <- IS a suspending call !!!
}
It should be noted that there are many overloaded fetchX() methods on ResultQuery which provide lots of utility for synchronous calls, but they are not similarly overloaded for fetchAsync(). That's where the Kotlin programmer may wish to be familiar with the Java futures machinery: any sort of manipulation can be accomplished asynchronously using the thenApply {} method on CompletionStage<T>. For example, mapping the results:
suspend fun myQuery() {
    return dsl.select()
        // .etc()
        .fetchAsync()
        .thenApply { it.map(mapping(::Film)) } // <- done as part of the 'suspend'
        .await()
}
although it should be fine to do it after the suspend:
suspend fun myQuery() {
    val records = dsl.select()
        // .etc()
        .fetchAsync()
        .await()
    return records.map(mapping(::Film))
}
jOOQ has added R2DBC support, and ResultQuery is now a Reactive Streams Publisher. Currently the simplest approach is to use the Reactor Kotlin extensions to convert the Reactor APIs to Kotlin coroutines APIs:
Flux.from(
    ctx.select()
        ... // etc
        ... // do not call fetch()
)
    .asFlow()
Update: the jOOQ team closed the PR that would have added Kotlin coroutine support to DSLContext, so for now you have to use kotlinx-coroutines-reactor as above.
When switching to jOOQ 3.17 or later, you can use fetchAwait() directly; a whole series of xxxAwait() functions has been added:
ctx.select()
    ... // etc
    .fetchAwait()
Before 3.17 I did not find direct support for Kotlin coroutines, but **jOOQ 3.17 ships official Kotlin coroutines support in the `jooq-kotlin` module**.
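For completeness: since ResultQuery is a Reactive Streams Publisher when jOOQ runs on R2DBC, kotlinx-coroutines-reactive can also collect it as a Flow directly, without an explicit Flux. A minimal sketch, assuming an R2DBC-backed DSLContext named ctx and a jOOQ-generated FILM table:

import kotlinx.coroutines.flow.toList
import kotlinx.coroutines.reactive.asFlow

// The query is a Publisher<Record1<String>>; nothing executes until the Flow is collected.
suspend fun filmTitles(): List<String> =
    ctx.select(FILM.TITLE)
        .from(FILM)
        .asFlow()
        .toList()
        .map { it.value1() }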
Related
I have a requirement where we want to asynchronously handle some upstream request/payload via a coroutine. I see that there are several ways to do this, but I'm wondering which is the right approach:
1. Provide an explicit Spring service class that implements CoroutineScope
2. Autowire a singleton scope/context backed by a defined thread-pool dispatcher
3. Define a method-local CoroutineScope object
Following on from this question, I'm wondering what the trade-off is if we define method-local scopes like below:
fun testSuspensions(count: Int) {
    val launchTime = measureTimeMillis {
        val parentJob = CoroutineScope(Dispatchers.IO).launch {
            repeat(count) {
                this.launch {
                    process() // Some long-running process
                }
            }
        }
    }
}
An alternative approach is to autowire an explicit scope object backed by a custom dispatcher:
@KafkaListener(
    topics = ["test_topic"],
    concurrency = "1",
    containerFactory = "someListenerContainerConfig"
)
private fun testKafkaListener(consumerRecord: ConsumerRecord<String, ByteArray>, ack: Acknowledgment) {
    try {
        this.coroutineScope.launch {
            consumeRecordAsync(consumerRecord)
        }
    } finally {
        ack.acknowledge()
    }
}
suspend fun consumeRecordAsync(record: ConsumerRecord<String, ByteArray>) {
    println("[${Thread.currentThread().name}] Starting to consume record - ${record.key()}")
    val statusCode = initiateIO(record) // Add error-handling depending on Kafka topic commit semantics.
    // Chain any other business logic (depending on status-code) as suspending functions.
    consumeStatusCode(record.key(), statusCode)
}

suspend fun initiateIO(record: ConsumerRecord<String, ByteArray>): Int {
    return withContext(Dispatchers.IO) { // Switch context to an IO thread for HTTP.
        println("[${Thread.currentThread().name}] Executing network call - ${record.key()}")
        delay(1000 * 2) // Simulate IO call
        200 // Return status-code
    }
}

suspend fun consumeStatusCode(recordKey: String, statusCode: Int) {
    delay(1000 * 1) // Simulate work.
    println("[${Thread.currentThread().name}] consumed record - $recordKey, status-code - $statusCode")
}
The bean is autowired as follows in some upstream config class:
@Bean(name = ["testScope"])
fun defineExtensionScope(): CoroutineScope {
    val threadCount: Int = 4
    return CoroutineScope(Executors.newFixedThreadPool(threadCount).asCoroutineDispatcher())
}
It depends on what your goal is. If you just want to avoid the thread-per-request model, you can use Spring's support for suspend functions in controllers instead (via WebFlux), and that removes the need for an external scope at all:
suspend fun testSuspensions(count: Int) {
    val execTime = measureTimeMillis {
        coroutineScope {
            repeat(count) {
                launch {
                    process() // some long-running process
                }
            }
        }
    }
    // all child coroutines are done at this point
}
If you really want your method to return immediately and schedule coroutines that outlive it, you indeed need that extra scope.
Regarding option 1), making custom classes implement CoroutineScope is not encouraged anymore (as far as I understand). It's usually suggested to use composition instead: declare a scope as a property rather than implementing the interface in your own classes, as in the sketch just below. So I would suggest your option 2).
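A minimal sketch of that composition style, with illustrative names (RecordConsumer and handle are not from your code):

import kotlinx.coroutines.CoroutineDispatcher
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.cancel
import kotlinx.coroutines.launch

// The class *has* a scope instead of *being* one, so its public API stays clean
// and the scope can be cancelled when the component is shut down.
class RecordConsumer(dispatcher: CoroutineDispatcher) {
    private val scope = CoroutineScope(SupervisorJob() + dispatcher)

    fun consume(record: String) {
        scope.launch { handle(record) }
    }

    private suspend fun handle(record: String) { /* ... */ }

    fun shutdown() = scope.cancel()
}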
I would say option 3) is out of the question, because there is no point in using CoroutineScope(Dispatchers.IO).launch { ... }. It's no better than using GlobalScope.launch(Dispatchers.IO) { ... } (it has the same pitfalls) - you can read about the pitfalls of GlobalScope in its documentation.
The main problem is that you run your coroutines outside structured concurrency (your running coroutines are not children of a parent job and may accumulate and hold resources if they are not well behaved and you forget about them). In general it's better to define a scope that is cancelled when you no longer need any of the coroutines it runs, so you can clean up rogue coroutines.
That said, in some circumstances you do need to run coroutines "forever" (for the whole life of your application). In that case it's ok to use GlobalScope, or a custom application-wide scope if you need to customize things like the thread pool or exception handler. But in any case don't create a scope on the spot just to launch a coroutine without keeping a handle to it.
In your case, it seems there is no clear moment when you would no longer care about the long-running coroutines, so you may be OK with the fact that your coroutines can live forever and are never cancelled. In that case, I would suggest a custom application-wide scope that you wire into your components, for example something like the sketch below.
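A minimal sketch of such an application-wide scope wired as a Spring bean (the configuration class, bean name, pool size, and the ClosableCoroutineScope wrapper are illustrative, not something prescribed by Spring or kotlinx.coroutines):

import java.util.concurrent.Executors
import kotlin.coroutines.CoroutineContext
import kotlinx.coroutines.CoroutineExceptionHandler
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.asCoroutineDispatcher
import kotlinx.coroutines.cancel
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

// Wrapper so Spring can cancel remaining coroutines on context shutdown
// (close() is picked up automatically as the destroy method of an AutoCloseable bean).
class ClosableCoroutineScope(context: CoroutineContext) : CoroutineScope, AutoCloseable {
    override val coroutineContext: CoroutineContext = context
    override fun close() = coroutineContext.cancel()
}

@Configuration
class CoroutineScopeConfig {

    @Bean(name = ["applicationScope"])
    fun applicationScope(): ClosableCoroutineScope {
        val dispatcher = Executors.newFixedThreadPool(4).asCoroutineDispatcher()
        // SupervisorJob: one failing coroutine does not cancel its siblings.
        val handler = CoroutineExceptionHandler { _, e ->
            println("Uncaught coroutine failure: $e")
        }
        return ClosableCoroutineScope(SupervisorJob() + dispatcher + handler)
    }
}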
I am trying to execute a transaction on a Redis instance from within a Spring Boot application written in Kotlin. I have followed the recommendation in the Spring documentation on the best practice to achieve this.
I am struggling with the Kotlin implementation, however. Specifically, I don't know how to implement the Java generic interface with a generic method to make it work in the Kotlin code:
redisTemplate.execute(object : SessionCallback<List<String>> {
    override fun <K : Any?, V : Any?> execute(operations: RedisOperations<K, V>): List<String>? {
        operations.multi()
        operations.opsForValue().set("key", "value")
        return operations.exec()
    }
})
The code above complains that the set method expects parameters with types K and V respectively but String is found instead.
Is there an elegant way how to inline the interface implementation in Kotlin without having to use unchecked casting or other convoluted approaches to make this work?
I think you're facing this problem because of the poor interface definition of SessionCallback; the framework itself does unsafe casts internally.
If we take a look at the SessionCallback definition, we can see that it looks as follows:
public interface SessionCallback<T> {
    @Nullable
    <K, V> T execute(RedisOperations<K, V> operations) throws DataAccessException;
}
The generics K, V, referring to the types of the keys and values in your Redis, are not type parameters of the SessionCallback interface itself, which is why the Kotlin compiler has a hard time inferring them: the execute function only takes a parameter of type SessionCallback<T>, without the key and value types being passed as parameters to that interface.
Your best option might be to provide a nice wrapper around that API using inline extension functions with reified generics, doing some controlled unsafe casts.
Something like this might be enough:
inline fun <reified K : Any?, reified V : Any?, reified T> RedisTemplate<K, V>.execute(
    crossinline callback: (RedisOperations<K, V>) -> T?
): T? {
    // Adapt the lambda to the SessionCallback interface, casting the operations to the
    // template's own key/value types (a controlled unchecked cast).
    val sessionCallback = object : SessionCallback<T> {
        @Suppress("UNCHECKED_CAST")
        override fun <KK, VV> execute(operations: RedisOperations<KK, VV>) =
            callback(operations as RedisOperations<K, V>) as T?
    }
    return execute(sessionCallback)
}
Which you can then consume like this:
fun doSomething(redisTemplate: RedisTemplate<String, String>) {
    redisTemplate.execute { operations ->
        operations.multi()
        operations.opsForValue().set("key", "value")
        operations.exec() as List<String>
    }
}
And yes, you need to cast the .exec() result, because the API doesn't use generics there and returns a List<Object>, as you can see in the official documentation.
I've tried the following code, but the task finishes before the response arrives:
buildscript {
    dependencies {
        classpath("io.ktor:ktor-client-core:1.6.5")
        classpath("io.ktor:ktor-client-cio:1.6.5")
    }
}

tasks {
    register("suspendCall") {
        doLast {
            kotlinx.coroutines.GlobalScope.launch {
                val client = io.ktor.client.HttpClient()
                val response = client.get<io.ktor.client.statement.HttpResponse>("https://ktor.io/")
                println(response)
            }
        }
    }
}
Is there a correct way to wait for async code to complete?
To my knowledge, to work asynchronously or in parallel, you will want to use the Worker API:
The Worker API provides the ability to break up the execution of a task action into discrete units of work and then to execute that work concurrently and asynchronously. This allows Gradle to fully utilize the resources available and complete builds faster.
As a result, you should not need to use a specific language asynchronous concept such as Kotlin's coroutines. Instead, you can use plain Kotlin or Java and let Gradle handle the asynchronous bits; a rough sketch is below.
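A rough sketch of the Worker API approach in a build.gradle.kts (FetchPage and FetchTask are illustrative names, and the HTTP call uses a plain blocking java.net.URL read instead of Ktor, since the work action itself can be synchronous):

import javax.inject.Inject
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction
import org.gradle.workers.WorkAction
import org.gradle.workers.WorkParameters
import org.gradle.workers.WorkerExecutor

// The unit of work: a plain blocking HTTP GET; Gradle runs submitted work concurrently.
abstract class FetchPage : WorkAction<WorkParameters.None> {
    override fun execute() {
        val body = java.net.URL("https://ktor.io/").readText()
        println("Fetched ${body.length} characters")
    }
}

abstract class FetchTask @Inject constructor(
    private val workerExecutor: WorkerExecutor
) : DefaultTask() {
    @TaskAction
    fun run() {
        // Gradle waits for all submitted work to finish before the task (and the build) completes.
        workerExecutor.noIsolation().submit(FetchPage::class.java) { }
    }
}

tasks.register("fetchPage", FetchTask::class.java)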
Hi guys, I'm trying to improve the performance of some computation in my system. Basically, I want to generate a series of actions based on some data. This doesn't scale well, and I want to try doing it in parallel and getting a result afterwards (a bit like how futures work).
I have an interface with a series of implementations that each get a collection of actions, and I want to call all of these in parallel and await the results at the end.
The issue is that, when I view the logs, it's clearly doing this sequentially, waiting on each action getter before going to the next one. I thought async would do this asynchronously, but it's not.
The method the runBlocking is in is within a Spring transaction. Maybe that has something to do with it.
runBlocking {
    val actions = actionsReportGetters.map { actionReportGetter ->
        async {
            getActions(actionReportGetter, abstractUser)
        }
    }.awaitAll().flatten()
    allActions.addAll(actions)
}

private suspend fun getActions(actionReportGetter: ActionReportGetter, traderUser: TraderUser): List<Action> {
    return actionReportGetter.getActions(traderUser)
}

interface ActionReportGetter {
    fun getActions(traderUser: TraderUser): List<Action>
}
It looks like you are doing some blocking operation in ActionReportGetter.getActions in a single-threaded environment (probably on the main thread).
For such IO operations you should launch your coroutines on Dispatchers.IO, which provides a thread pool with multiple threads.
Update your code to this:
async(Dispatchers.IO) { // Switch to the IO dispatcher
    getActions(actionReportGetter, abstractUser)
}
Also, getActions need not be a suspending function here; you can remove the suspend modifier from it. A sketch of the adjusted block is below.
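Putting both changes together, a sketch of the adjusted block (keeping actionsReportGetters, abstractUser and allActions from the question):

runBlocking {
    val actions = actionsReportGetters.map { actionReportGetter ->
        async(Dispatchers.IO) {
            // Blocking call, now executed on a thread from the IO pool.
            actionReportGetter.getActions(abstractUser)
        }
    }.awaitAll().flatten()
    allActions.addAll(actions)
}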
I'm working on a Spring Boot (2.2) project using Kotlin, with CouchDB as a (reactive) database and, in consequence, async DAOs (either suspend functions, or functions returning a Flow). I'm trying to set up WebFlux in order to have async controllers too (again, I want to return Flows, not Flux). But I'm having trouble retrieving my security context from ReactiveSecurityContextHolder.
From what I've read, unlike SecurityContextHolder, which uses a ThreadLocal to store it, ReactiveSecurityContextHolder relies on the fact that Spring, while subscribing to my reactive chain, also stores that context inside the chain, thus allowing me to call ReactiveSecurityContextHolder.getContext() from within the chain.
The problem is that I have to transform my Mono<SecurityContext> into a Flow at some point, which makes me lose my SecurityContext. So my question is: is there a way to have a Spring Boot controller returning a Flow while retrieving the security context from ReactiveSecurityContextHolder inside my logic? Basically, after simplification, it should look like this:
@GetMapping
fun getArticles(): Flow<String> {
    return ReactiveSecurityContextHolder.getContext().flux().asFlow() // returns nothing
}
Note that if I return the Flux directly (skipping the .asFlow()), or add a .single() or .toList() in the end (hence using a suspend fun), then it works fine and my security context is returned, but again that's not what I want. I guess the solution would be to transfer the context from the Flux (initial reactive chain from ReactiveSecurityContextHolder) to the Flow, but it doesn't seem to be done by default.
Edit: here is a sample project showcasing the problem: https://github.com/Simon3/webflux-kotlin-sample
What you are really trying to achieve is accessing your ReactorContext from inside a Flow.
One way to do this is to relax the requirement of returning a Flow and return a Flux instead. This allows you to recover the ReactorContext and pass it to the Flow you are going to use to generate your data.
@ExperimentalCoroutinesApi
@GetMapping("/flow")
fun flow(): Flux<Map<String, String>> = Mono.subscriberContext().flatMapMany { reactorCtx ->
    flow {
        val ctx = coroutineContext[ReactorContext.Key]?.context?.get<Mono<SecurityContext>>(SecurityContext::class.java)?.asFlow()?.single()
        emit(mapOf("user" to ((ctx?.authentication?.principal as? User)?.username ?: "<NONE>")))
    }.flowOn(reactorCtx.asCoroutineContext()).asFlux()
}
In the case where you need to access the ReactorContext from a suspend method, you can simply get it back from the coroutineContext with no further artifice:
@ExperimentalCoroutinesApi
@GetMapping("/suspend")
suspend fun suspend(): Map<String, String> {
    val ctx = coroutineContext[ReactorContext.Key]?.context?.get<Mono<SecurityContext>>(SecurityContext::class.java)?.asFlow()?.single()
    return mapOf("user" to ((ctx?.authentication?.principal as? User)?.username ?: "<NONE>"))
}