I have a simple series of chained operations that retrieve and persist some data using a Panache repository, running in a Quarkus service. When these operations are parallelised, a ContextNotActiveException is thrown; when the parallelisation is removed, the code works as intended.
This code works:
dataRepository.get()
.map { convert(it) }
.forEach { persist(it) }
This code does not:
dataRepository.get()
.parallelStream()
.map { convert(it) }
.forEach { persist(it) }
The Quarkus documentation is pretty limited here, only addressing the use of Mutiny or RxJava.
How can I propagate the context such that parallelStream() will work?
Unfortunately Context Propagation does not play well with parallel Java streams, because making a stream parallel automatically moves the execution to the ForkJoinPool, which means you lose the context. You'll need to handle the parallelism differently, without having the Java streams do it for you - you will probably want to use the org.eclipse.microprofile.context.ManagedExecutor.
Assuming that it's the convert method which, for whatever reason, requires an active request context, you will need to dispatch its invocation into the managed executor. This will make sure that the context is propagated.
In Java code, one close equivalent to your code that I can think of is this:
@Inject
org.eclipse.microprofile.context.ManagedExecutor executor;

(...)

dataRepository.streamAll()
    .forEach(i -> {
        executor.supplyAsync(() -> convert(i))
                .thenAccept(converted -> persist(converted));
    });
This may be a question about coroutines in general, but in my ktor server (Netty engine, default configuration) application I perform several asynchronous calls to a database and an API endpoint, and I want to make sure I am using coroutines efficiently. My questions are as follows:
Is there a tool or method to work out if my code is using coroutines effectively, or do I just need to use curl to spam my endpoint and measure the performance of moving processes to another context e.g. compute?
I don't want to start moving tasks/jobs to another context 'just in case' but should I treat the default coroutine context in my Route.route() similar to the Android main thread and perform the minimum amount of work on it?
Here is a rough example of the code that I'm using:
fun Route.route() {
get("/") {
call.respondText(getRemoteText())
}
}
suspend fun getRemoteText(): String? {
    return suspendCoroutine { cont ->
        // renamed: Kotlin identifiers can't start with a digit
        val document = thirdPartyLibrary.get()
        if (success) { // success and data come from the library's response
            cont.resume(data)
        } else {
            cont.resume(null)
        }
    }
}
You could use something like Apache JMeter, but writing a script and spamming your server with curl also seems like a good option to me.
Coroutines are pretty efficient when it comes to context/thread switching, and with Dispatchers.Default and Dispatchers.IO you get a thread pool. There is some documentation around this, but I think you can definitely leverage these dispatchers for heavy operations.
There are a few tools for testing endpoints. JMeter is good; there are also command-line tools like wrk, wrk2 and siege.
Of course, context switching has a cost. The coroutine in routing is safe for running blocking operations unless you have the shareWorkGroup option set. However, it's usually good to use a separate thread pool, because you can control its size (maximum number of threads) so you don't bring your database down.
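To illustrate the separate-pool advice, here is a minimal sketch (assuming Ktor 2.x imports; the route name and blockingDatabaseQuery are hypothetical) of moving blocking work onto Dispatchers.IO inside a route:

```kotlin
import io.ktor.server.application.call
import io.ktor.server.response.respondText
import io.ktor.server.routing.Route
import io.ktor.server.routing.get
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Hypothetical blocking call, e.g. plain JDBC.
fun blockingDatabaseQuery(): String = "report"

fun Route.reportRoute() {
    get("/report") {
        // Run the blocking work on the IO dispatcher so the Netty
        // event-loop threads stay free to handle other requests.
        val report = withContext(Dispatchers.IO) {
            blockingDatabaseQuery()
        }
        call.respondText(report)
    }
}
```

Dispatchers.IO is sized for blocking workloads; for strict control over the maximum number of threads hitting the database, you could instead create your own pool with newFixedThreadPoolContext.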
I want to chain one mono after each flux event. The mono publisher will need information from each event published by the flux. The response should be a flux with data of the flux event and the mono response.
After digging, I end up with a map inside a flatMap. The code looks like this:
override fun searchPets(petSearch: PetSearch): Flux<Pet> {
return petRepository
.searchPets(petSearch) // returns Flux<pet>
.flatMap { pet ->
petService
.getCollarForMyPet() // returns Mono<collar>
.map { collar -> PetConverter.addCollarToPet(pet, collar) } // returns pet (now with collar)
}
}
My main concerns are:
Is using a map inside a flatMap a code smell?
Will the pet variable suffer race conditions as multiple flux events (and the mono results) arrive?
Is there any better way to approach this kind of behaviour?
This approach is perfectly fine.
The Reactive Streams specification mandates that onNext events don't overlap, so there won't be an issue with race conditions.
flatMap introduces concurrency though, so multiple calls to the PetService will run in parallel. This shouldn't be an issue, unless searchPets emits some instance of Pet twice.
Note that due to that concurrency, flatMap may reorder pets in this scenario. Imagine the search returns petA and then petB, but the petService call for petA takes longer. In the output of the flatMap, petB would be emitted first (with its collar set), then petA.
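If emission order matters, Reactor also offers flatMapSequential, which still subscribes to the inner publishers concurrently but re-emits the results in source order. A sketch reusing the (hypothetical) types from the question:

```kotlin
override fun searchPets(petSearch: PetSearch): Flux<Pet> {
    return petRepository
        .searchPets(petSearch)
        // Inner Monos still run concurrently, but their results are
        // buffered and emitted in the original order of the pets.
        .flatMapSequential { pet ->
            petService
                .getCollarForMyPet()
                .map { collar -> PetConverter.addCollarToPet(pet, collar) }
        }
}
```

The trade-off is extra buffering: a slow inner call holds back everything that completed after it in source order.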
I want to stream result objects captured by Spring JDBC RowCallbackHandler using via a Kotlin Sequence.
The code looks basically like this:
fun findManyObjects(): Sequence<Thing> = sequence {
val rowHandler = object : RowCallbackHandler {
override fun processRow(resultSet: ResultSet) {
val thing = // create from resultSet
yield(thing) // ERROR! No coroutine scope
}
}
jdbcTemplate.query("select * from ...", rowHandler)
}
But I get the compilation error:
Suspension functions can be called only within coroutine body.
However, exactly this "coroutine body" should exist, because the whole block is wrapped in a sequence builder. But it doesn't seem to work with a nested object.
Minimal example to show that it doesn't compile with a nested object:
// compiles
sequence {
yield(1)
}
// doesn't compile
sequence {
object {
fun doit() {
yield(1) // Suspension functions can be called only within coroutine body.
}
}
}
How can I pass an object from the ResultSet into the Sequence?
Use Flow for asynchronous data streams
The reason you can't call yield inside your RowCallbackHandler object is twofold.
The processRow function isn't a suspending function (and can't be, because it's declared in and called by Java). A suspending function like yield can only be called by another suspending function.
A sequence always ends when the sequence { ... } builder returns. Even if you and I know that the query method will invoke the RowCallbackHandler before returning from the sequence, the Kotlin compiler has no way of knowing that. Yielding sequence values from functions and objects other than the body of the sequence itself is never allowed, because there's no way of knowing where or when they will run.
To solve this problem, we need to introduce a different kind of coroutine: one that can suspend itself while it waits for the RowCallbackHandler to be invoked.
Unfortunately, because we're talking about JDBC here, there may not be much to gain by introducing full-blown coroutines. Under the hood, calls to the database will always be made in a blocking way, removing a lot of the benefit. It might well be simpler not to try and 'stream' results, and just iterate over them in a boring, old-fashioned way. But let's explore the possibilities all the same.
The problem with sequences
Sequences are designed for on-demand computation, and are not asynchronous. They can't wait for other asynchronous operations, such as callbacks. The sequence builder's yield function simply suspends while waiting for the caller to retrieve the next item, and it's the only suspending function a sequence is ever allowed to call. You can demonstrate this if you try to use a simple suspending call like delay inside a sequence. You'll get a compile error letting you know that you're operating in a restricted coroutine scope.
sequence<String> { delay(1000) } // doesn't compile
Without the ability to call suspending functions, there's no way to wait for a callback to be invoked. Recognising this limitation, Kotlin provides an alternative mechanism for streams of on-demand values that do provide data in an asynchronous way. It's called a Flow.
Callback flows
The mechanism for using Flows to provide values from a callback interface is described very nicely by Roman Elizarov in his Medium article Callbacks and Kotlin Flows.
If you did want to use a callback flow, you'd simply replace sequence with callbackFlow, and replace yield with sendBlocking (or trySendBlocking in newer kotlinx-coroutines versions).
Your code might look something like this:
fun findManyObjects(): Flow<Thing> = callbackFlow {
val rowHandler = object : RowCallbackHandler {
override fun processRow(resultSet: ResultSet) {
val thing = // create from resultSet
sendBlocking(thing)
}
}
jdbcTemplate.query("select * from ...", rowHandler)
close() // the query is finished, so there are no more rows
}
A simpler flow
While that's the idiomatic way to stream values provided by a callback, it might not be the simplest approach to this problem. By avoiding callbacks altogether, you can use the much more common flow builder, passing each value to its emit function. But now that you have asynchrony in the form of coroutines, you can't just return a flow and then allow Spring to immediately close the result set. You need to be able to delay the closing of the result set until the flow has actually been consumed. That means peeling back the abstractions provided by RowCallbackHandler or ResultSetExtractor, which expect to process all the results in a blocking way, and instead providing your own implementation.
fun Connection.findManyObjects(): Flow<Thing> = flow {
prepareStatement("select * from ...").use { statement ->
statement.executeQuery().use { resultSet ->
while (resultSet.next()) {
val thing = // create from resultSet
emit(thing)
}
}
}
}
Note the use blocks, which will deal with closing the statement and result set. Because we don't reach the end of the use blocks until the while loop has completed and all the values have been emitted, the flow is free to suspend while the result set remains open.
So why use a flow at all?
You might notice that if you do it this way, you can actually replace flow and emit with sequence and yield. So have we come full circle? Well, sort of. The difference is that a flow can only be consumed from a coroutine, whereas with sequence, you can iterate over the resulting values without suspending at all. In this particular case, it's a hard call to make, because JDBC operations are always blocking.
If you use a sequence, the calling thread will block as it waits to receive the data. Values in a sequence are always computed by the thing consuming the sequence, so if the sequence invokes a blocking function, the consumer's thread will block waiting for the value. In a non-coroutine application, that might be okay, but if you're using coroutines, you really want to avoid hiding blocking calls inside innocuous-looking sequences.
If you use a flow, you can at least isolate the blocking calls by having the flow run on a particular dispatcher. For example, you could use the built-in IO dispatcher to perform the JDBC call, then switch back to the default dispatcher for any further processing. If you definitely want to stream values, I think this is a better approach than using a sequence.
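As a sketch of that isolation (assuming the findManyObjects flow from above), flowOn confines everything upstream of it to the given dispatcher, while collection continues on the caller's dispatcher:

```kotlin
val things: Flow<Thing> = connection.findManyObjects()
    // The JDBC work inside the flow builder runs on the IO dispatcher;
    // downstream operators and the collector are unaffected.
    .flowOn(Dispatchers.IO)
```

This keeps the blocking driver calls off whatever dispatcher the rest of your coroutine code runs on, without the consumer needing to know about it.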
With all this in mind, you'll need to be careful with your use of coroutines and dispatchers if you do choose one of these solutions. If you'd rather not worry about that, there's nothing wrong with using a regular ResultSetExtractor and forgetting about both sequences and flows for now.
I am using the V8 JavaScript engine in my app. It enforces that all operations are executed on the same thread it was initialised on.
I have introduced a CoroutineContext from a HandlerThread using the asCoroutineDispatcher() extension.
Using that context, for every function I do:
suspend fun executeRandomCode() = withContext(jsContext) {
// run desired V8 code
}
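For reference, a sketch of the setup described above, assuming kotlinx-coroutines-android (the thread name and jsContext are placeholders):

```kotlin
import android.os.Handler
import android.os.HandlerThread
import kotlinx.coroutines.android.asCoroutineDispatcher

// A single dedicated thread; V8 must be initialised and used on it.
val jsThread = HandlerThread("v8-thread").apply { start() }

// Every withContext(jsContext) block will execute on this one thread.
val jsContext = Handler(jsThread.looper).asCoroutineDispatcher("v8")
```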
So my question is, if I run the following code:
ioScope.launch {
executeRandomCode()
executeRandomCode2()
executeRandomCode3()
}
Is there any overhead of doing multiple function calls where each function does withContext(jsContext)?
Is there a better way to enforce single thread using coroutines?
Given an Akka.net-based actor system with some basic structure like:
/user
/coordinator
/child (x1000, with RoundRobinPool router)
The coordinator actor defines a supervision strategy that uses Directive.Restart.
Child actors could fail for several reasons (for example, with ArithmeticException, InvalidOperationException and MyCustomException).
But when a child fails with MyCustomException, I'd like the ability to additionally handle it in some way, without changing the default supervision mechanism (the restart approach should still work here).
For example, adding a Console.WriteLine with the exception details.
How do I implement it?
In general, MyCustomException signals a condition you control, so you could log it right away in your child logic without needing to escalate it to the parent. But if that's not possible, you can define your own supervisor strategy class like this:
public class MySupervisorStrategy : OneForOneStrategy
{
public MySupervisorStrategy(ILoggingAdapter log) : base(reason =>
{
if (reason is MyCustomException)
{
log.Error(reason.Message);
return Directive.Restart;
}
return Akka.Actor.SupervisorStrategy.DefaultDecider.Decide(reason);
})
{
}
}
There are two ways to apply it to your actor:
Use Props.Create<MyActor>().WithSupervisorStrategy(new MySupervisorStrategy(system.Log)) to apply it directly from your actor system.
Attach it directly in the actor's logic by overriding the SupervisorStrategy method of the actor (use Context.GetLogger() to get a log instance for the current actor).
The second option is less flexible but will probably work better in situations where you need remote deployment scenarios.