Kotlin coroutines - withContext vs suspendCoroutine

I was wondering: since both withContext and suspendCoroutine are suspend functions, is there any difference between them, other than the fact that suspendCoroutine offers a continuation so you can control when it resumes (or cancels, if we use the suspendCancellableCoroutine variant)?
I would say both can be used to avoid callbacks (which is one advantage of coroutines). Is there any major difference?

Actually, only suspendCoroutine can be used to translate a callback-based API into coroutines. withContext doesn't suspend the coroutine while waiting for some external event; its effect is to temporarily change the coroutine's context (this is mainly about changing the dispatcher). The coroutine immediately continues in the other context and then comes back to the caller's context.
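To make the contrast concrete, here is a minimal Kotlin sketch (FakeClient and its fetch callback are hypothetical stand-ins for a real callback-based library): suspendCancellableCoroutine parks the coroutine until the callback fires, while withContext merely hops to another dispatcher and straight back.

```kotlin
import kotlinx.coroutines.*
import kotlin.coroutines.resume

// Hypothetical callback-based API we want to adapt.
class FakeClient {
    fun fetch(onResult: (String) -> Unit) {
        Thread { onResult("payload") }.start() // callback fires on a worker thread
    }
}

// suspendCancellableCoroutine bridges the callback into a suspension point:
// the coroutine stays suspended until cont.resume() is called.
suspend fun FakeClient.fetchSuspending(): String =
    suspendCancellableCoroutine { cont ->
        fetch { result -> cont.resume(result) }
    }

// withContext never waits for an external event; it just runs the block on
// another dispatcher and returns its result to the caller's context.
suspend fun hashOnDefault(data: String): Int =
    withContext(Dispatchers.Default) { data.hashCode() }

fun main() = runBlocking {
    println(FakeClient().fetchSuspending())
    println(hashOnDefault("payload"))
}
```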

Related

When to use Unconfined in Kotlin

When would I choose to use Dispatchers.Unconfined? Is it for when it doesn't really matter where the coroutine runs, so you let the coroutine choose the thread pool as it sees fit?
And how does it differ from Dispatchers.Default? Is it that when running on the Default dispatcher, execution always stays within a specific thread pool defined as the default one?
So you let the coroutine choose the thread pool as it sees fit?
That's not really how Unconfined works. The best way to understand it is that it is a "no-op" dispatcher that doesn't actually do any dispatch at all. Wherever you call continuation.resume(), that's where the coroutine resumes execution — within that very call. When the resume() call returns, it means the coroutine has either suspended again or completed.
In normal programming, you usually call continuation.resume() from a callback and it is not your code that runs the callback, so you don't actually have any control over the thread where your coroutine will resume. It is not advisable to use the Unconfined dispatcher when resuming from a callback provided by a library that is not under your control.
Unconfined is really a special-cased tool you can use when building a coroutine execution environment yourself, or in other custom scenarios. Basically, you should use it only when you are actively looking for a way to disable the normal dispatching mechanism.
The unconfined dispatcher is appropriate for coroutines which neither consume CPU time nor update any shared data (like UI) confined to a specific thread.
So I'd use it in situations that are neither IO-bound, UI-bound, nor computation-heavy, basically :D.
I think the number of use-cases for this is pretty low, but I'd think of an operation which isn't heavy, yet which for some reason you'd still like to run on a different thread.
Dispatchers.Default is really different; it's mostly used for heavy CPU-bound operations.
This is because it dispatches work to a thread pool with a number of threads equal to the number of CPU cores (and at least 2). This way developers can leverage the full capacity of the CPU when doing heavy computational work.
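The difference is easy to observe by printing thread names around a suspension point. A small sketch, assuming kotlinx.coroutines' default thread naming:

```kotlin
import kotlinx.coroutines.*

// Returns the thread names observed before and after a suspension point.
suspend fun observeThreads(): Pair<String, String> {
    val before = Thread.currentThread().name
    delay(10)
    val after = Thread.currentThread().name
    return before to after
}

fun main() = runBlocking {
    // Unconfined: no dispatch at all. The coroutine starts on the calling
    // thread and, after delay(), resumes on whatever thread invoked
    // resume() (here, kotlinx.coroutines' internal timer thread).
    val (u1, u2) = withContext(Dispatchers.Unconfined) { observeThreads() }
    println("Unconfined: $u1 -> $u2")

    // Default: both halves run on the shared pool sized to the CPU core
    // count (minimum 2), intended for CPU-bound work.
    val (d1, d2) = withContext(Dispatchers.Default) { observeThreads() }
    println("Default: $d1 -> $d2")
}
```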

When should I use `Dispatchers.Unconfined` vs `EmptyCoroutineContext`?

When is it appropriate to use Dispatchers.Unconfined vs EmptyCoroutineContext?
My use case is that I want to create an API for intercepting network calls. I want to provide an optional parameter to control what dispatcher the interception is executed on. For the default value of this parameter, should it be Dispatchers.Unconfined or EmptyCoroutineContext?
For the default value of this parameter, should it be Dispatchers.Unconfined or EmptyCoroutineContext?
Most of the time it is Dispatchers.Unconfined.
EmptyCoroutineContext has no elements in it; semantically, it is a null object. Coroutine builders such as launch specify their behaviour for that case: if the context does not have a dispatcher nor any other ContinuationInterceptor, then Dispatchers.Default is used. Most of the time you should not use EmptyCoroutineContext, just as you don't use nulls or null objects.
Dispatchers.Unconfined is different: it executes the coroutine immediately on the current thread and later resumes it in whatever thread called resume.
It is usually a good fit for things like intercepting regular non-suspending APIs or invoking coroutine-related code from blocking-world callbacks.
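A sketch of what such an API might look like (CallLogger and intercept are hypothetical names, not from any real library): the context parameter defaults to Dispatchers.Unconfined so that, by default, interception adds no dispatch of its own, while callers can still pin it to a dispatcher.

```kotlin
import kotlinx.coroutines.*
import kotlin.coroutines.CoroutineContext

// Hypothetical interceptor API. With Dispatchers.Unconfined as the default,
// the interceptor runs right where it is called, with no extra dispatch.
// (A default of EmptyCoroutineContext passed to withContext would instead
// mean "inherit whatever context the caller already has".)
class CallLogger(private val context: CoroutineContext = Dispatchers.Unconfined) {
    suspend fun intercept(url: String): String =
        withContext(context) { "intercepted:$url" }
}

fun main() = runBlocking {
    println(CallLogger().intercept("https://example.com"))
    println(CallLogger(Dispatchers.Default).intercept("https://example.com"))
}
```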

When to use loop.add_signal_handler?

I noticed the asyncio library has a loop.add_signal_handler(signum, callback, *args) method.
So far I have just been catching Unix signals in the main file using the signal module alongside my asynchronous code, like this:
signal.signal(signal.SIGHUP, callback)
async def main():
...
Is that an oversight on my part?
The add_signal_handler documentation is sparse1, but looking at the source, it appears that the main added value compared to signal.signal is that add_signal_handler will ensure that the signal wakes up the event loop and allow the loop to invoke the signal handler along with other queued callbacks and runnable coroutines.
So far I have just been catching unix signals in the main file using the signals module [...] Is that an oversight on my part?
That depends on what the signal handler is doing. Printing a message or updating a global is fine, but if it is invoking anything in any way related to asyncio, it's most likely an oversight. A signal can be delivered at (almost) any time, including during execution of an asyncio callback, a coroutine, or even during asyncio's own bookkeeping.
For example, the implementation of asyncio.Queue freely assumes that access to the queue is single-threaded and non-reentrant. A signal handler adding something to a queue using q.put_nowait() would be disastrous if it interrupted an ongoing invocation of q.put_nowait() on the same queue. As with typical race conditions in multi-threaded code, an interruption in the middle of the assignment to _unfinished_tasks might well cause it to be incremented only once instead of twice (once for each put_nowait).
Asyncio code is designed for cooperative multi-tasking, where the points at which a function may suspend are clearly denoted by the await and related keywords. The add_signal_handler function ensures that your signal handler gets invoked at such a point, so you're free to implement it as you'd implement any other asyncio callback.
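A minimal runnable sketch of the safe pattern (Unix-only; the signal is sent to our own process to simulate an external sender):

```python
import asyncio
import os
import signal

async def main():
    loop = asyncio.get_running_loop()
    handled = asyncio.Event()

    # The handler runs as an ordinary event-loop callback, never in the
    # middle of a coroutine or asyncio's own bookkeeping, so it may safely
    # touch queues, events, tasks, etc.
    loop.add_signal_handler(signal.SIGUSR1, handled.set)

    # Simulate an external process sending us a signal.
    os.kill(os.getpid(), signal.SIGUSR1)

    await asyncio.wait_for(handled.wait(), timeout=5)
    print("signal handled at a safe point")

asyncio.run(main())
```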
1 When this answer was originally written, the add_signal_handler documentation was briefer than today and didn't cover the difference to signal.signal at all. This question prompted it getting expanded in the meantime.

NSUrlConnection synchronous request without accepting redirects

I am currently implementing code that uses the macOS API for HTTP/HTTPS requests in a Delphi/Lazarus program.
The code runs in its own thread (i.e. not the main/UI thread) and is part of a larger threading-based crawler that runs across Windows/Mac and Delphi/Lazarus. I try to implement the actual HTTP/S request part using the OS API, but handle e.g. processing and taking action upon HTTP headers myself.
This means I would like to keep using synchronous mode if possible.
I want the request to simply return to me what the server returns.
I do not want it to follow redirects.
I currently use sendSynchronousRequest_returningResponse_error.
I have tried searching Google, but it seems there is no way to do this when using synchronous requests. That just seems a bit odd.
No, NSURLConnection's synchronous functionality is very limited, and was never expanded because it is so strongly discouraged. That said, it is technically possible to implement what you're trying to do.
My recollection, from having replaced that method with an NSURLSession equivalent once (to swizzle in a less leaky replacement for that method in a binary-only library), is that you basically need to write a method that uses a shared dictionary to store a semaphore for each NSURLSessionDataTask (using the data task as a key). You create the semaphore with a count of zero so that it blocks immediately when you wait on it, start the asynchronous request on the main thread, and then wait on the semaphore in the current thread. In the asynchronous data task's completion handler block, you increment the semaphore, thus unblocking the calling thread.
The trick is to ensure that the session runs its callbacks on a thread OTHER than the current one (which is blocked waiting for the semaphore). So you'll need to dispatch_async into the main thread when you actually start the data task.
Ostensibly, if you supported converting the task into a download task or stream task in the relevant delegate method, you would also need to take appropriate action to update the shared dictionary as well, but I'm assuming you won't use that feature. :-)
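For reference, here is a sketch of that pattern in Swift with URLSession (all names are illustrative, not from the question's codebase); the delegate refuses redirects so the 3xx response itself is returned, as the question asks:

```swift
import Foundation

// Sketch only: a synchronous GET that does not follow redirects.
final class NoRedirectDelegate: NSObject, URLSessionTaskDelegate {
    // Passing nil to the completion handler refuses the redirect, so the
    // 3xx response is delivered to the completion block as-is.
    func urlSession(_ session: URLSession, task: URLSessionTask,
                    willPerformHTTPRedirection response: HTTPURLResponse,
                    newRequest request: URLRequest,
                    completionHandler: @escaping (URLRequest?) -> Void) {
        completionHandler(nil)
    }
}

func synchronousGET(_ url: URL) -> (Data?, URLResponse?, Error?) {
    // Count starts at zero so wait() blocks until the handler signals.
    let semaphore = DispatchSemaphore(value: 0)
    var result: (Data?, URLResponse?, Error?) = (nil, nil, nil)

    // delegateQueue: nil gives the session its own queue, separate from the
    // calling thread, so the completion handler can run while we block below.
    let session = URLSession(configuration: .default,
                             delegate: NoRedirectDelegate(),
                             delegateQueue: nil)
    session.dataTask(with: url) { data, response, error in
        result = (data, response, error)
        semaphore.signal()
    }.resume()

    semaphore.wait()   // never do this on the main/UI thread
    return result
}
```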

Waiting for task throws

I'm just starting out with WinRT's concurrency model. I have a task that I need to wait on, but calling wait() throws an exception that I can't catch.
Simplest possible code:
concurrency::task<StorageFile^> getFileTask = concurrency::create_task(Windows::Storage::ApplicationData::Current->LocalFolder->GetFileAsync(fileString));
getFileTask.wait();
The exception it throws is:
Microsoft C++ exception: Concurrency::invalid_operation at memory location 0x0402C45C
How do I set this up so that it works?
One of the most important rules that you must follow when building a Windows Store app is that you must never block the UI thread. Never ever.
If you start an asynchronous operation, you get a future or task that "owns" that operation. If you call get() or wait() on that operation before the asynchronous operation has completed, that call will block until the operation completes, then it will return the result.
Since blocking the UI thread is bad, if you attempt to synchronize with a not-yet-completed asynchronous operation on the UI thread, the call to get() or wait() will throw, to prevent the UI thread from being blocked. This exception is there to help you to remember not to block the UI thread. :-)
You should use task's then() to provide a continuation that will run when the asynchronous operation completes. If the continuation needs to run on the UI thread as well, be sure to pass task_continuation_context::use_current() to then() to ensure that the continuation is marshaled back to the UI thread for execution.
Note: This exception is only thrown if you are using C++/CX. If you are using C++ without the C++/CX language extensions, the call to get() or wait() will successfully block, potentially resulting in a poor user experience. In general, C++/CX has many more "guard rail" features like this one that are designed to make it easier for you to write good code. When using C++ without the language extensions, you get the full power of C++, with the understanding that there are more opportunities for error.
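Applied to the question's snippet, the continuation-based version might look like this (a sketch; it builds only with the C++/CX Windows Store toolchain):

```cpp
// Instead of blocking with wait(), attach a continuation that runs when
// GetFileAsync completes.
auto getFileTask = concurrency::create_task(
    Windows::Storage::ApplicationData::Current->LocalFolder->GetFileAsync(fileString));

getFileTask.then([](Windows::Storage::StorageFile^ file)
{
    // Pass concurrency::task_continuation_context::use_current() as the
    // second argument to then() if this must run on the UI thread.
    // ... use `file` here ...
});
```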
