After reading the official docs on coroutine cancellation, suppose I have the following code:
val job = scope.launch {
    val userId = networkOperationOne()
    // check if the coroutine is still active before calling operation two?
    val userDetails = networkOperationTwo(userId)
}
Should I check isActive before making network call two?
Let's assume that job.cancel() was called while networkOperationOne() is still in progress, and that I'm not calling any suspending function that automatically performs the cancellation check for me.
It depends on how networkOperationOne and networkOperationTwo are suspending.
They may internally be cooperative anyway, which means you do not have to check isActive.
When in doubt, throw in ensureActive() to perform the check and act accordingly.
In this case, the cost of the check is negligible compared to the network request, so add one in.
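For illustration, here is a minimal sketch of where such a check could go, reusing the function names from the question; ensureActive() (from kotlinx.coroutines) throws CancellationException if the job has already been cancelled:

val job = scope.launch {
    val userId = networkOperationOne()
    // if job.cancel() was called while the first request was running,
    // this throws CancellationException and operation two never starts
    ensureActive()
    val userDetails = networkOperationTwo(userId)
}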
We have a button in the UI, which, when pressed, will make some remote network call in its own coroutine. However, if the user spams the button for whatever reason, it is possible that the remote data might somehow get corrupted. We would like to prevent this by discarding all requests until the current one is completed.
There are many ways to do this. I have created a simple extension function on CoroutineScope that only launches if the CoroutineScope is not already active. This is what I have:
Extension Function
fun CoroutineScope.safeLaunch(dispatcher: CoroutineDispatcher, block: () -> Unit): Job {
    return if (!isActive) {
        launch(dispatcher) {
            block()
        }
    } else {
        launch {}
    }
}
Example Use
fun loadNotifications() {
    viewModelScope.safeLaunch(IO) {
        getNotifications.invoke() // Suspend function invoke should only be from a coroutine or another suspend function
    }
}
The problem is, the above won't compile as I get an error saying
Suspend function invoke should only be from a coroutine or another
suspend function
Does anyone know what I'm doing wrong or how to make it work?
There are multiple problems with this code:
Fixing the error you mentioned is very easy and only requires declaring block as suspendable: block: suspend () -> Unit.
isActive doesn't mean the job/scope is actively running something, but that it hasn't finished. isActive in your example always returns true, even before launching any coroutine on it.
If your server can't handle concurrent actions, then you should really fix this on the server side. Limiting the client isn't a proper fix, as it can still be exploited by users. Also, remember that multiple clients can perform the same action at the same time.
As you mentioned, there are several ways this situation can be handled on the client side:
In the case of the UI and the button, it is probably best for the user experience to disable the button or overlay the screen/button with a loading indicator. That gives the user feedback that the operation is running in the background, and at the same time it fixes the problem of multiple calls to the server.
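As a rough sketch of that first option (the isLoading flag and the ViewModel wiring here are purely illustrative, not from the question; the UI would observe isLoading and disable the button while it is true):

class NotificationsViewModel : ViewModel() {
    val isLoading = MutableStateFlow(false)

    fun loadNotifications() {
        if (isLoading.value) return // called from the main thread, so this check is race-free
        isLoading.value = true
        viewModelScope.launch {
            try {
                getNotifications() // the suspend call from the question
            } finally {
                isLoading.value = false
            }
        }
    }
}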
In the general case, if we just need to limit concurrency and reject any additional tasks while the last one is still running, probably the easiest approach is to use a Mutex:
private val scope = CoroutineScope(EmptyCoroutineContext)
private val mutex = Mutex()

fun safeLaunch(block: suspend () -> Unit) {
    if (!mutex.tryLock()) {
        return
    }
    scope.launch {
        try {
            block()
        } finally {
            mutex.unlock()
        }
    }
}
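Applied to the example from the question, usage could look like this (getNotifications is the suspend call from the question):

fun loadNotifications() {
    safeLaunch {
        getNotifications() // if a previous load is still running, this whole call is simply dropped
    }
}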
Note we need a separate mutex per scope or per type of task. I don't think it is possible to create such a utility as a generic extension function working with any coroutine scope. Actually, we could implement it in a way very similar to your original code by looking at the current job's children, but I consider such a solution a hack and discourage it.
I want to stream result objects captured by a Spring JDBC RowCallbackHandler via a Kotlin Sequence.
The code looks basically like this:
fun findManyObjects(): Sequence<Thing> = sequence {
    val rowHandler = object : RowCallbackHandler {
        override fun processRow(resultSet: ResultSet) {
            val thing = // create from resultSet
            yield(thing) // ERROR! No coroutine scope
        }
    }
    jdbcTemplate.query("select * from ...", rowHandler)
}
But I get the compilation error:
Suspension functions can be called only within coroutine body.
However, exactly this "coroutine body" should exist, because the whole block is wrapped in a sequence builder. But it doesn't seem to work with a nested object.
Minimal example to show that it doesn't compile with a nested object:
// compiles
sequence {
    yield(1)
}

// doesn't compile
sequence {
    object {
        fun doit() {
            yield(1) // Suspension functions can be called only within coroutine body.
        }
    }
}
How can I pass an object from the ResultSet into the Sequence?
Use Flow for asynchronous data streams
The reason you can't call yield inside your RowCallbackHandler object is twofold.
1. The processRow function isn't a suspending function (and can't be, because it's declared in and called by Java). A suspending function like yield can only be called by another suspending function.
2. A sequence always ends when the sequence { ... } builder returns. Even if you and I know that the query method will invoke the RowCallbackHandler before returning from the sequence, the Kotlin compiler has no way of knowing that. Yielding sequence values from functions and objects other than the body of the sequence itself is never allowed, because there's no way of knowing where or when they will run.
To solve this problem, we need to introduce a different kind of coroutine: one that can suspend itself while it waits for the RowCallbackHandler to be invoked.
Unfortunately, because we're talking about JDBC here, there may not be much to gain by introducing full-blown coroutines. Under the hood, calls to the database will always be made in a blocking way, removing a lot of the benefit. It might well be simpler not to try and 'stream' results, and just iterate over them in a boring, old-fashioned way. But let's explore the possibilities all the same.
The problem with sequences
Sequences are designed for on-demand computation, and are not asynchronous. They can't wait for other asynchronous operations, such as callbacks. The sequence builder's yield function simply suspends while waiting for the caller to retrieve the next item, and it's the only suspending function a sequence is ever allowed to call. You can demonstrate this if you try to use a simple suspending call like delay inside a sequence. You'll get a compile error letting you know that you're operating in a restricted coroutine scope.
sequence<String> { delay(1000) } // doesn't compile
Without the ability to call suspending functions, there's no way to wait for a callback to be invoked. Recognising this limitation, Kotlin provides an alternative mechanism for streams of on-demand values that do provide data in an asynchronous way. It's called a Flow.
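By contrast, the equivalent flow builder compiles fine, because a flow's block is an ordinary, unrestricted suspending lambda:

flow<String> { delay(1000) } // compiles: a flow may call arbitrary suspending functions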
Callback flows
The mechanism for using Flows to provide values from a callback interface is described very nicely by Roman Elizarov in his Medium article Callbacks and Kotlin Flows.
If you did want to use a callback flow, you'd simply replace sequence with callbackFlow, and replace yield with sendBlocking.
Your code might look something like this:
fun findManyObjects(): Flow<Thing> = callbackFlow {
    val rowHandler = object : RowCallbackHandler {
        override fun processRow(resultSet: ResultSet) {
            val thing = // create from resultSet
            sendBlocking(thing)
        }
    }
    jdbcTemplate.query("select * from ...", rowHandler)
    close() // the query is finished, so there are no more rows
}
A simpler flow
While that's the idiomatic way to stream values provided by a callback, it might not be the simplest approach to this problem. By avoiding callbacks altogether, you can use the much more common flow builder, passing each value to its emit function. But now that you have asynchrony in the form of coroutines, you can't just return a flow and then allow Spring to immediately close the result set. You need to be able to delay the closing of the result set until the flow has actually been consumed. That means peeling back the abstractions provided by RowCallbackHandler or ResultSetExtractor, which expect to process all the results in a blocking way, and instead providing your own implementation.
fun Connection.findManyObjects(): Flow<Thing> = flow {
    prepareStatement("select * from ...").use { statement ->
        statement.executeQuery().use { resultSet ->
            while (resultSet.next()) {
                val thing = // create from resultSet
                emit(thing)
            }
        }
    }
}
Note the use blocks, which will deal with closing the statement and result set. Because we don't reach the end of the use blocks until the while loop has completed and all the values have been emitted, the flow is free to suspend while the result set remains open.
So why use a flow at all?
You might notice that if you do it this way, you can actually replace flow and emit with sequence and yield. So have we come full circle? Well, sort of. The difference is that a flow can only be consumed from a coroutine, whereas with sequence, you can iterate over the resulting values without suspending at all. In this particular case, it's a hard call to make, because JDBC operations are always blocking.
If you use a sequence, the calling thread will block as it waits to receive the data. Values in a sequence are always computed by the thing consuming the sequence, so if the sequence invokes a blocking function, the consumer's thread will block waiting for the value. In a non-coroutine application, that might be okay, but if you're using coroutines, you really want to avoid hiding blocking calls inside innocuous-looking sequences.
If you use a flow, you can at least isolate the blocking calls by having the flow run on a particular dispatcher. For example, you could use the built-in IO dispatcher to perform the JDBC call, then switch back to the default dispatcher for any further processing. If you definitely want to stream values, I think this is a better approach than using a sequence.
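As a sketch of that idea, building on the Connection.findManyObjects flow above (flowOn and Dispatchers.IO are standard kotlinx.coroutines APIs; the wrapper function name is just illustrative):

fun streamThings(connection: Connection): Flow<Thing> =
    connection.findManyObjects()
        .flowOn(Dispatchers.IO) // the blocking JDBC work upstream runs on IO threads; collectors stay on their own dispatcher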
With all this in mind, you'll need to be careful with your use of coroutines and dispatchers if you do choose one of these solutions. If you'd rather not worry about that, there's nothing wrong with using a regular ResultSetExtractor and forgetting about both sequences and flows for now.
I've got the following functions for making server calls:
suspend fun <T : BaseResponse> processPost(post: Post): T? {
    val gson = Gson()
    val data = gson.toJson(post.reqData)
    val res = sendPost(data, post.script)
    Log.d("server", "res:" + res.first)
    // process response here
    return null
}

private fun sendPost(data: String, url: String): Pair<String, Int> {
    // send data to server
}
In some cases processPost may enter an infinite loop (for instance, while waiting for an access token refresh). Of course this code should never run on the main thread. But when I mark this function as suspend, the IDE highlights the modifier as redundant. It's not a big deal, but I'm curious: how then can I keep the function from executing on the main thread?
It seems that you have quite some learning to do on coroutines. It's impossible to cover all you need to know in a single answer; that's what tutorials are for. Anyway, I will try to answer just the points you asked about. It may not make sense before you learn the concepts, and I'm sorry if my answer does not help.
Just like many other things, coroutines are not magic. If you don’t understand what something does, you cannot hope it has the properties you want. It may sound harsh but I want to stress that such mentality is a major cause of bugs.
Making a function suspending allows you to call other suspending functions in the function body. It does not make blocking calls non-blocking, nor does it automatically jump threads for you.
You can use withContext to have the execution jump to another thread.
suspend fun xyz() = withContext(Dispatchers.IO) {
    ...
}
When you call xyz on the main thread, it hands the task to the IO dispatcher. The main thread is not blocked and can handle other stuff in the app.
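For example (the scope and updateUi here are just placeholders):

scope.launch(Dispatchers.Main) {
    val data = xyz()  // suspends; the main thread is free to handle other events meanwhile
    updateUi(data)    // resumes back on the main thread once xyz() is done
}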
EDIT regarding the comment.
Sorry for being so patronizing and making a wrong guess about your misconception.
If you just want the compiler/the IDE to shut up about the warning, you can simply add @Suppress("RedundantSuspendModifier") to the function. But you shouldn't, because the compiler knows better than you, at least for now.
The great thing about coroutines is that you can write in direct style without blocking the main thread.
launch(Dispatchers.Main) {
    val result = makeAnHttpCall() // this can take a long time
    messWithUi(result) // changes to the UI have to be made on the main thread
}
I hope it is obvious by now that the suspend modifier is not going to stop the main thread from calling the function.
@Suppress("RedundantSuspendModifier")
suspend fun someHeavyComputation(): Result {
    return ...
}
launch(Dispatchers.Main) {
    val result = someHeavyComputation() // this will run on the main thread
    messWithUi(result)
}
Now if you want the computation not to be done in the main thread:
suspend fun someHeavyComputation() = withContext(Dispatchers.Default) {
    ... // this will run in a thread pool
}
Further reading: Blocking threads, suspending coroutines.
What is the proper implementation of the SendAsync method of the Azure Service Bus TopicClient?
In the second implementation, will the BrokeredMessage actually be disposed before the SendAsync happens?
public async Task SendAsync<TMessage>(TMessage message, IDictionary<string, object> properties = null)
{
    using (var bm = MessagingHelper.CreateBrokeredMessage(message, properties))
    {
        await this._topicClient.Value.SendAsync(bm);
    }
}

public Task SendAsync<TMessage>(TMessage message, IDictionary<string, object> properties = null)
{
    using (var bm = MessagingHelper.CreateBrokeredMessage(message, properties))
    {
        return this._topicClient.Value.SendAsync(bm);
    }
}
I would like to get the most out of the async/await pattern.
Answer to your question: the second approach could cause issues with disposed objects; you have to wait for SendAsync to finish executing before you can release its resources.
Detailed explanation.
If you call await, execution of the method stops at that point and does not continue until the awaited operation has returned. The brokered message is kept in a hidden local (a field of the compiler-generated state machine) and is not disposed in the meantime.
If you don't await, execution continues and all resources of the brokered message are freed before they are actually consumed (since using calls Dispose on the object at the end of the block), or while they are being consumed. This will definitely lead to exceptions inside SendAsync, which has already started executing at that point.
What await does is "pause" the current method (not the thread) and wait for the task to complete and produce its result, and that's what you actually need. The purpose of async/await is to allow a task to execute concurrently with something else, while providing the ability to wait for the result of that concurrent operation when it is really necessary and further execution isn't possible without it.
The first approach is good if every method up the call chain is async too; that is, if the caller of your SendAsync is an async Task method, and the caller of that caller, and so on up to the top-level calling method.
Also, consider the exceptions that could be raised; they are listed here. As you can see, there are so-called transient errors, the kind of error that a retry can possibly fix. In your code, there is no such exception handling. An example of the retry pattern can be found here, but the aforementioned article on exceptions may suggest better solutions, and that is a topic for another question. I would also add some logging to at least be aware of any non-transient exceptions.
I've an actor where I want to store my mutable state inside a map.
Clients can send Get(key:String) and Put(key:String,value:String) messages to this actor.
I'm considering the following options.
Don't use futures inside the actor's receive method. This may have a negative impact on both latency and throughput in case I have a large number of gets/puts, because all operations will be performed in order.
Use java.util.concurrent.ConcurrentHashMap and then invoke the gets and puts inside a Future.
Given that java.util.concurrent.ConcurrentHashMap is thread-safe and provides a finer level of granularity, I was wondering if it is still a problem to close over the ConcurrentHashMap inside a Future created for each put and get.
I'm aware that it's a really bad idea to close over mutable state inside a Future inside an actor, but I'm still interested to know whether it is correct in this particular case.
In general, java.util.concurrent.ConcurrentHashMap is made for concurrent use. As long as you don't try to transport the closure to another machine, and you think through the implications of it being used concurrently (e.g. if you read a value, use a function to modify it, and then put it back, do you want to use the replace(key, oldValue, newValue) method to make sure it hasn't changed while you were doing the processing?), it should be fine in Futures.
It may be a little late, but still: in the book Reactive Web Applications, the author shows a way around this specific problem, using pipeTo as below.
def receive = {
  case ComputeReach(tweetId) =>
    fetchRetweets(tweetId, sender()) pipeTo self
  case fetchedRetweets: FetchedRetweets =>
    followerCountsByRetweet += fetchedRetweets -> List.empty
    fetchedRetweets.retweets.foreach { rt =>
      userFollowersCounter ! FetchFollowerCount(
        fetchedRetweets.tweetId, rt.user
      )
    }
  ...
}
where followerCountsByRetweet is mutable state of the actor. The result of fetchRetweets(), which is a Future, is piped back to the same actor as a FetchedRetweets message, and the actor then acts on that message to modify its state. This avoids any concurrent operation on the state.