What are the performance implications of .await on a Ready future?

In a language like C#, given this code (I am not using the await keyword on purpose):
async Task Foo()
{
    var task = LongRunningOperationAsync();
    // Some other non-related operation
    AnotherOperation();
    var result = task.Result;
}
In the first line, the long operation is run in another thread, and a Task is returned (that is a future). You can then do another operation that will run in parallel with the first one, and at the end, you can wait for the operation to be finished. I think that is also the behavior of async/await in Python, JavaScript, etc.
On the other hand, in Rust, I read in the RFC that:
A fundamental difference between Rust's futures and those from other languages is that Rust's futures do not do anything unless polled. The whole system is built around this: for example, cancellation is dropping the future for precisely this reason. In contrast, in other languages, calling an async fn spins up a future that starts executing immediately.
In this situation, what is the purpose of async/await in Rust? Seeing other languages, this notation is a convenient way to run parallel operations, but I cannot see how it works in Rust if the calling of an async function does not run anything.

You are conflating a few concepts.
Concurrency is not parallelism, and async and await are tools for concurrency, which may sometimes mean they are also tools for parallelism.
Additionally, whether a future is immediately polled or not is orthogonal to the syntax chosen.
async / await
The keywords async and await exist to make creating and interacting with asynchronous code easier to read and look more like "normal" synchronous code. This is true in all of the languages that have such keywords, as far as I am aware.
Simpler code
This is code that creates a future that adds two numbers when polled:
before
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

fn long_running_operation(a: u8, b: u8) -> impl Future<Output = u8> {
    struct Value(u8, u8);

    impl Future for Value {
        type Output = u8;

        fn poll(self: Pin<&mut Self>, _ctx: &mut Context) -> Poll<Self::Output> {
            Poll::Ready(self.0 + self.1)
        }
    }

    Value(a, b)
}
after
async fn long_running_operation(a: u8, b: u8) -> u8 {
    a + b
}
Note that the "before" code is basically the implementation of today's poll_fn function
See also Peter Hall's answer about how keeping track of many variables can be made nicer.
References
One of the potentially surprising things about async/await is that it enables a specific pattern that wasn't possible before: using references in futures. Here's some code that fills up a buffer with a value in an asynchronous manner:
before
use std::future::Future;
use std::io;
use futures::FutureExt; // for `map`

fn fill_up<'a>(buf: &'a mut [u8]) -> impl Future<Output = io::Result<usize>> + 'a {
    futures::future::lazy(move |_| {
        for b in buf.iter_mut() { *b = 42 }
        Ok(buf.len())
    })
}

fn foo() -> impl Future<Output = Vec<u8>> {
    let mut data = vec![0; 8];
    fill_up(&mut data).map(|_| data)
}
This fails to compile:
error[E0597]: `data` does not live long enough
--> src/main.rs:33:17
|
33 | fill_up(&mut data).map(|_| data)
| ^^^^^^^^^ borrowed value does not live long enough
34 | }
| - `data` dropped here while still borrowed
|
= note: borrowed value must be valid for the static lifetime...
error[E0505]: cannot move out of `data` because it is borrowed
--> src/main.rs:33:32
|
33 | fill_up(&mut data).map(|_| data)
| --------- ^^^ ---- move occurs due to use in closure
| | |
| | move out of `data` occurs here
| borrow of `data` occurs here
|
= note: borrowed value must be valid for the static lifetime...
after
use std::io;

async fn fill_up(buf: &mut [u8]) -> io::Result<usize> {
    for b in buf.iter_mut() { *b = 42 }
    Ok(buf.len())
}

async fn foo() -> Vec<u8> {
    let mut data = vec![0; 8];
    fill_up(&mut data).await.expect("IO failed");
    data
}
This works!
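As a quick sanity check (a sketch, not part of the original answer), you can drive this foo to completion with the futures crate's single-threaded executor:

fn main() {
    // block_on polls the future to completion on the current thread.
    let data = futures::executor::block_on(foo());
    assert_eq!(data, vec![42; 8]);
}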
Calling an async function does not run anything
The implementation and design of a Future and the entire system around futures, on the other hand, are unrelated to the keywords async and await. Indeed, Rust had a thriving asynchronous ecosystem (for example, with Tokio) before the async/await keywords ever existed. The same was true for JavaScript.
Why aren't Futures polled immediately on creation?
For the most authoritative answer, check out this comment from withoutboats on the RFC pull request:
A fundamental difference between Rust's futures and those from other
languages is that Rust's futures do not do anything unless polled. The
whole system is built around this: for example, cancellation is
dropping the future for precisely this reason. In contrast, in other
languages, calling an async fn spins up a future that starts executing
immediately.
A point about this is that async & await in Rust are not inherently
concurrent constructions. If you have a program that only uses async &
await and no concurrency primitives, the code in your program will
execute in a defined, statically known, linear order. Obviously, most
programs will use some kind of concurrency to schedule multiple,
concurrent tasks on the event loop, but they don't have to. What this
means is that you can - trivially - locally guarantee the ordering of
certain events, even if there is nonblocking IO performed in between
them that you want to be asynchronous with some larger set of nonlocal
events (e.g. you can strictly control ordering of events inside of a
request handler, while being concurrent with many other request
handlers, even on two sides of an await point).
This property gives Rust's async/await syntax the kind of local
reasoning & low-level control that makes Rust what it is. Running up
to the first await point would not inherently violate that - you'd
still know when the code executed, it would just execute in two
different places depending on whether it came before or after an
await. However, I think the decision made by other languages to start
executing immediately largely stems from their systems which
immediately schedule a task concurrently when you call an async fn
(for example, that's the impression of the underlying problem I got
from the Dart 2.0 document).
Some of the Dart 2.0 background is covered by this discussion from munificent:
Hi, I'm on the Dart team. Dart's async/await was designed mainly by
Erik Meijer, who also worked on async/await for C#. In C#, async/await
is synchronous to the first await. For Dart, Erik and others felt that
C#'s model was too confusing and instead specified that an async
function always yields once before executing any code.
At the time, I and another on my team were tasked with being the
guinea pigs to try out the new in-progress syntax and semantics in our
package manager. Based on that experience, we felt async functions
should run synchronously to the first await. Our arguments were
mostly:
1. Always yielding once incurs a performance penalty for no good reason. In most cases, this doesn't matter, but in some it really does. Even in cases where you can live with it, it's a drag to bleed a little perf everywhere.

2. Always yielding means certain patterns cannot be implemented using async/await. In particular, it's really common to have code like (pseudo-code here):

getThingFromNetwork():
    if (downloadAlreadyInProgress):
        return cachedFuture
    cachedFuture = startDownload()
    return cachedFuture
In other words, you have an async operation that you can call multiple times before it completes. Later calls use the same
previously-created pending future. You want to ensure you don't start
the operation multiple times. That means you need to synchronously
check the cache before starting the operation.
If async functions are async from the start, the above function can't use async/await.
We pleaded our case, but ultimately the language designers stuck with
async-from-the-top. This was several years ago.
That turned out to be the wrong call. The performance cost is real
enough that many users developed a mindset that "async functions are
slow" and started avoiding using it even in cases where the perf hit
was affordable. Worse, we see nasty concurrency bugs where people
think they can do some synchronous work at the top of a function and
are dismayed to discover they've created race conditions. Overall, it
seems users do not naturally assume an async function yields before
executing any code.
So, for Dart 2, we are now taking the very painful breaking change to
change async functions to be synchronous to the first await and
migrating all of our existing code through that transition. I'm glad
we're making the change, but I really wish we'd done the right thing
on day one.
I don't know if Rust's ownership and performance model place different
constraints on you where being async from the top really is better,
but from our experience, sync-to-the-first-await is clearly the better
trade-off for Dart.
cramert replies (note that some of this syntax is outdated now):
If you need code to execute immediately when a function is called
rather than later on when the future is polled, you can write your
function like this:
fn foo() -> impl Future<Item=Thing> {
    println!("prints immediately");
    async_block! {
        println!("prints when the future is first polled");
        await!(bar());
        await!(baz())
    }
}
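With today's stable syntax, the same idea looks roughly like this (a sketch, not from the original comment; bar and baz are stand-in stubs):

async fn bar() {}
async fn baz() {}

fn foo() -> impl std::future::Future<Output = ()> {
    println!("prints immediately, when foo() is called");
    async {
        println!("prints when the future is first polled");
        bar().await;
        baz().await
    }
}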
Code examples
These examples use the async support in Rust 1.39 and the futures crate 0.3.1.
Literal transcription of the C# code
use futures; // 0.3.1

async fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");
    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");
    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = long_running_operation(1, 2);

    another_operation(3, 4);

    sum.await
}

fn main() {
    let task = foo();

    futures::executor::block_on(async {
        let v = task.await;
        println!("Result: {}", v);
    });
}
If you called foo, the sequence of events in Rust would be:
1. Something implementing Future<Output = u8> is returned.
That's it. No "actual" work is done yet. If you take the result of foo and drive it towards completion (by polling it, in this case via futures::executor::block_on), then the next steps are:
1. Something implementing Future<Output = u8> is returned from calling long_running_operation (it does not start work yet).
2. another_operation does its work, as it is synchronous.
3. The .await syntax causes the code in long_running_operation to start. The foo future will continue to return "not ready" until the computation is done.
The output would be:
foo
another_operation
long_running_operation
Result: 3
Note that there are no thread pools here: this is all done on a single thread.
async blocks
You can also use async blocks:
use futures::{future, FutureExt}; // 0.3.1

fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");
    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");
    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = async { long_running_operation(1, 2) };
    let oth = async { another_operation(3, 4) };

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}
Here we wrap synchronous code in an async block and then wait for both actions to complete before this function is complete.
Note that wrapping synchronous code like this is not a good idea for anything that will actually take a long time; see What is the best approach to encapsulate blocking I/O in future-rs? for more info.
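If the synchronous work really is long-running, one common alternative (a sketch assuming the Tokio runtime, which the rest of this answer does not use, and reusing the synchronous long_running_operation from above) is to hand the work to a dedicated blocking pool instead of wrapping it in a plain async block:

async fn foo() -> u8 {
    // spawn_blocking moves the closure onto Tokio's blocking thread pool,
    // so it doesn't stall the executor threads that run async tasks.
    tokio::task::spawn_blocking(|| long_running_operation(1, 2))
        .await
        .expect("blocking task panicked")
}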
With a threadpool
// Requires the `thread-pool` feature to be enabled
use futures::{executor::ThreadPool, future, task::SpawnExt, FutureExt};

async fn foo(pool: &mut ThreadPool) -> u8 {
    println!("foo");

    let sum = pool
        .spawn_with_handle(async { long_running_operation(1, 2) })
        .unwrap();
    let oth = pool
        .spawn_with_handle(async { another_operation(3, 4) })
        .unwrap();

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}
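A minimal sketch of driving that version (reusing the synchronous helpers defined above; ThreadPool construction can fail, hence the expect):

fn main() {
    let mut pool = ThreadPool::new().expect("Failed to build pool");
    let v = futures::executor::block_on(foo(&mut pool));
    println!("Result: {}", v);
}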

The purpose of async/await in Rust is to provide a toolkit for concurrency—same as in C# and other languages.
In C# and JavaScript, async methods start running immediately, and they're scheduled whether you await the result or not. In Python and Rust, when you call an async method, nothing happens (it isn't even scheduled) until you await it. But it's largely the same programming style either way.
The ability to spawn another task (that runs concurrent with and independent of the current task) is provided by libraries: see async_std::task::spawn and tokio::task::spawn.
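For example, a minimal sketch with Tokio (assuming the tokio crate with its macros and a multi-threaded runtime; the numbers are only illustrative). The spawned task starts running on the runtime right away, independently of the current task, and the returned handle is itself a future you can .await later:

#[tokio::main]
async fn main() {
    // The task starts executing on the runtime as soon as it is spawned.
    let handle = tokio::task::spawn(async { 1 + 2 });

    // ... do other, unrelated work here, concurrently with the spawned task ...

    // Await the handle when the result is actually needed.
    let sum = handle.await.expect("task panicked");
    println!("sum = {}", sum);
}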
As for why Rust async is not exactly like C#, well, consider the differences between the two languages:
Rust discourages global mutable state. In C# and JS, every async method call is implicitly added to a global mutable queue. It's a side effect to some implicit context. For better or worse, that's not Rust's style.
Rust is not a framework. It makes sense that C# provides a default event loop. It also provides a great garbage collector! Lots of things that come standard in other languages are optional libraries in Rust.

Consider this simple pseudo-JavaScript code that fetches some data, processes it, fetches some more data based on the previous step, summarises it, and then prints a result:
getData(url)
    .then(response -> parseObjects(response.data))
    .then(data -> findAll(data, 'foo'))
    .then(foos -> getWikipediaPagesFor(foos))
    .then(sumPages)
    .then(sum -> console.log("sum is: ", sum));
In async/await form, that's:
async {
    let response = await getData(url);
    let objects = parseObjects(response.data);
    let foos = findAll(objects, 'foo');
    let pages = await getWikipediaPagesFor(foos);
    let sum = sumPages(pages);
    console.log("sum is: ", sum);
}
It introduces a lot of single-use variables and is arguably worse than the original version with promises. So why bother?
Consider this change, where the variables response and objects are needed later on in the computation:
async {
    let response = await getData(url);
    let objects = parseObjects(response.data);
    let foos = findAll(objects, 'foo');
    let pages = await getWikipediaPagesFor(foos);
    let sum = sumPages(pages, objects.length);
    console.log("sum is: ", sum, " and status was: ", response.status);
}
And try to rewrite it in the original form with promises:
getData(url)
    .then(response -> Promise.resolve(parseObjects(response.data))
        .then(objects -> Promise.resolve(findAll(objects, 'foo'))
            .then(foos -> getWikipediaPagesFor(foos))
            .then(pages -> sumPages(pages, objects.length)))
        .then(sum -> console.log("sum is: ", sum, " and status was: ", response.status)));
Each time you need to refer back to a previous result, you need to nest the entire structure one level deeper. This can quickly become very difficult to read and maintain, but the async/await version does not suffer from this problem.

Related

How to convert an RxSwift Observable to a value?

I'm currently using RIBs and ReactorKit to bind networking data.
The problem here is that the network results come out as Observables, which I have a hard time binding to ReactorKit.
Please let me know if there is a way to strip the Observable or turn it into a value.
Just like how the value comes out when you read a BehaviorRelay's .value...
dependency.loadData.getData().flatMap { $0.detailData.flatMap { $0.result }}
====>> Observable
Now what do I do? TT
Please let me know if there is a way to strip the Observable or turn it into a value.
This is called "leaving" or "breaking" the monad and is a code smell.
In production code, it is rarely advised to 'break the monad', especially moving from an observable sequence to blocking methods. Switching between asynchronous and synchronous paradigms should be done with caution, as this is a common root cause for concurrency problems such as deadlock and scalability issues.
-- Intro to Rx
If you absolutely have to do it, then here is a way:
import RxSwift

class MyClass {
    private(set) var value: Int = 0
    private let disposeBag = DisposeBag()

    init(observable: Observable<Int>) {
        observable
            .subscribe(onNext: { [weak self] new in
                self?.value = new
            })
            .disposed(by: disposeBag)
    }
}
With the above, when you query value it will have the last value emitted from the observable. You risk race conditions doing this and that's up to you to deal with.
That's the direct answer to your question but it isn't the whole story. In ReactorKit, the API call should be made in your reactor's mutate() function. That function returns an Observable<Mutation> so instead of breaking the monad, you should be just mapping the API response into a Mutation which is likely a specific enum case that is then passed into your reduce() function.

RxSwift how to skip map depending on previous result?

I am trying to download some JSON, parse it, check some information in the JSON and, depending on the result, continue processing or not.
What's the most RxSwift idiomatic way of doing this?
URLSession.shared.rx
    .data(request: request)
    .observe(on: ConcurrentDispatchQueueScheduler(qos: .background))
    .flatMap(parseJson) // into ModelObject
    .flatMap(checkModel) // on some condition is there any way to jump into the onCompleted block? if the condition is false then execute processObject
    .map(processObject)
    .subscribe(
        onError: { error in
            print("error: \(error)")
        }, onCompleted: {
            print("Completed with no error")
        })
    .disposed(by: disposeBag)
where parseJson is something like:
func parseJson(_ data: Data) -> Single<ModelObject>
checkModel does some checking and, if some conditions are fulfilled, should complete the sequence without ending up in processObject:
func checkModel(_ modelObject: ModelObject) -> Single<ModelObject> {
//probably single is not what I want here
}
And finally processObject
func processObject(_ modelObject: ModelObject) -> Completable {
}
This is a bit of a tough question to answer because, on the one hand, you ask a bog-simple question about skipping a map while, on the other hand, you ask for the "most RxSwift idiomatic way of doing this," which would require more changes than simply skipping the map.
If I just answer the basic question, the solution would be to have checkModel return a Maybe rather than a Single.
Looking at this code from a "make it more idiomatic" perspective, a few more changes need to take place. A lot of what I'm about to say comes from assumptions based on the names of the functions and expectations as to what you are trying to accomplish. I will try to call out those assumptions as I go along...
The .observe(on: ConcurrentDispatchQueueScheduler(qos: .background)) is likely not necessary. URLSession already emits on a background thread.
The parseJson function probably should not return an Observable type at all. It should just return a ModelObject. This assumes that the function is pure; that it doesn't perform any side effect and merely transforms a Data into a ModelObject.
func parseJson(_ data: Data) throws -> ModelObject
The checkModel function should probably not return an Observable type. This really sounds like it should return a Bool and be used to filter the model objects that don't need further processing out. Here I'm assuming again that the function is pure, it doesn't perform any side-effect, it just checks the model.
func checkModel(_ modelObject: ModelObject) -> Bool
Lastly, the processObject function presumably has side effects. It's likely a consumer of data and therefore shouldn't return anything at all (i.e., it should return Void.)
func processObject(_ modelObject: ModelObject)
Update: In your comments you say you want to end with a Completable. Even so, I would not want this function to return a Completable, because that would make it lazy and thus require you to subscribe even when you just want to call it for its effects.
You can create a generic wrap operator to make any side-effecting function into a Completable:
extension Completable {
    static func wrap<T>(_ fn: @escaping (T) -> Void) -> (T) -> Completable {
        { element in
            fn(element)
            return Completable.empty()
        }
    }
}
If the above functions are adjusted as discussed above, then the Observable chain becomes:
let getAndProcess = URLSession.shared.rx.data(request: request)
    .map(parseJson)
    .filter(checkModel)
    .flatMap(Completable.wrap(processObject))
    .asCompletable()
The above will produce a Completable that will execute the flow every time it's subscribed to.
By setting things up this way, you will find that your base functions are far easier to test. You don't need any special infrastructure, not even RxTest, to make sure they are correct. Also, it is clear this way that parseJson and checkModel aren't performing any side effects.
The idea is to have a "Functional Core, Imperative Shell". The imperative bits (in this case the data request and the processing) are moved out to the edges while the core of the subscription is kept purely functional and easy to test/understand.

Observable unsubscribe inside subscribe method

I have tried to unsubscribe within the subscribe method. It seems like it works, but I haven't found an example on the internet showing that you can do it this way.
I know that there are many other ways to unsubscribe or to limit the subscription with pipes. Please do not suggest any other solution; instead, explain why you shouldn't do this, or whether it is a valid approach.
example:
let localSubscription = someObservable.subscribe(result => {
    this.result = result;
    if (localSubscription && someStatement) {
        localSubscription.unsubscribe();
    }
});
The problem
Sometimes the pattern you used above will work and sometimes it won't. Here are two examples; you can try to run them yourself. One will throw an error and the other will not.
const subscription = of(1,2,3,4,5).pipe(
    tap(console.log)
).subscribe(v => {
    if (v === 4) subscription.unsubscribe();
});
The output:
1
2
3
4
Error: Cannot access 'subscription' before initialization
Something similar:
const subscription = of(1,2,3,4,5).pipe(
    tap(console.log),
    delay(0)
).subscribe(v => {
    if (v === 4) subscription.unsubscribe();
});
The output:
1
2
3
4
This time you don't get an error, but you also unsubscribed before the 5 was emitted from the source observable of(1,2,3,4,5).
Hidden Constraints
If you're familiar with Schedulers in RxJS, you might immediately be able to spot the extra hidden information that allows one example to work while the other doesn't.
delay (even a delay of 0 milliseconds) returns an Observable that uses an asynchronous scheduler. This means, in effect, that the current block of code will finish execution before the delayed observable has a chance to emit.
This guarantees that, in a single-threaded environment (like the JavaScript runtime currently found in browsers), your subscription has been initialized.
The Solutions
1. Keep a fragile codebase
One possible solution is to just ignore common wisdom and continue to use this pattern for unsubscribing. To do so, you and anyone on your team who might use your code for reference or might someday need to maintain your code must take on the extra cognitive load of remembering which observables use which scheduler.
Changing how an observable transforms data in one part of your application may cause unexpected errors in every part of the application that relies on this data being supplied by an asynchronous scheduler.
For example: code that runs fine when querying a server may break when a cached result is returned synchronously. What seems like an optimization now wreaks havoc in your codebase. When this sort of error appears, the source can be rather difficult to track down.
Finally, if browsers (or Node.js, if that's where your code runs) ever start to support multi-threaded environments, your code will either have to make do without that enhancement or be re-written.
2. Making "unsubscribe inside subscription callback" a safe pattern
Idiomatic RxJS code tries to be scheduler agnostic wherever possible.
Here is how you might use the pattern above without worrying about which scheduler an observable is using. This is effectively scheduler agnostic, though it likely complicates a rather simple task much more than it needs to.
const stream = publish()(of(1,2,3,4,5));

const subscription = stream.pipe(
    tap(console.log)
).subscribe(x => {
    if (x === 4) subscription.unsubscribe();
});

stream.connect();
This lets you use an "unsubscribe inside a subscription" pattern safely. This will always work regardless of the scheduler and would continue to work if (for example) you put your code in a multi-threaded environment (the delay example above may break, but this will not).
3. RxJS Operators
The best solutions will be those that use operators that handle subscription/unsubscription on your behalf. They require no extra cognitive load in the best circumstances and manage to contain/manage errors relatively well (less spooky action at a distance) in the more exotic circumstances.
Most higher-order operators do this (concat, merge, concatMap, switchMap, mergeMap, etc.). Other operators like take, takeUntil, takeWhile, etc. let you use a more declarative style to manage subscriptions.
Where possible, these are preferable as they're all less likely to cause strange errors or confusion within a team that is using them.
The examples above re-written:
of(1,2,3,4,5).pipe(
    tap(console.log),
    first(v => v === 4)
).subscribe();
It's a working approach, but in Angular the recommended solution is to use the async pipe. In your example you assign the result to an object property, and that's not a good practice.
If you use your variable in the template, then just use the async pipe. If you don't, make it an observable like this:
private readonly result$ = someObservable.pipe(/* ...get exactly what you need here... */)
And then you can use your result$ whenever you need it: in another observable or in the template.
Also, you can use pipe(take(1)) or pipe(first()) for unsubscribing. There are also some other pipe operators that let you unsubscribe without additional code.
There are various ways of unsubscribing:
Method 1: Unsubscribe right after subscribing (not preferred)
let localSubscription = someObservable.subscribe(result => {
this.result = result;
}).unsubscribe();
---------------------
Method 2: If you want only the first one or two values, use the take operator or the first operator
a) let localSubscription =
someObservable.pipe(take(1)).subscribe(result => {
this.result = result;
});
b) let localSubscription =
someObservable.pipe(first()).subscribe(result => {
this.result = result;
});
---------------------
Method 3: Use Subscription and unsubscribe in your ngOnDestroy();
let localSubscription =
someObservable.subscribe(result => {
this.result = result;
});
ngOnDestroy() { this.localSubscription.unsubscribe() }
----------------------
Method 4: Use Subject and takeUntil Operator and destroy in ngOnDestroy
let destroySubject: Subject<any> = new Subject();
let localSubscription =
someObservable.pipe(takeUntil(this.destroySubject)).subscribe(result => {
this.result = result;
});
ngOnDestroy() {
this.destroySubject.next();
this.destroySubject.complete();
}
I would personally prefer method 4, because you can use the same destroy subject for multiple subscriptions if you have several on a single page.

What is the difference between "context" and "with_context" in anyhow?

This is the documentation for anyhow's Context:
/// Wrap the error value with additional context.
fn context<C>(self, context: C) -> Result<T, Error>
where
C: Display + Send + Sync + 'static;
/// Wrap the error value with additional context that is evaluated lazily
/// only once an error does occur.
fn with_context<C, F>(self, f: F) -> Result<T, Error>
where
C: Display + Send + Sync + 'static,
F: FnOnce() -> C;
In practice, the difference is that with_context requires a closure, as shown in anyhow's README:
use anyhow::{Context, Result};

fn main() -> Result<()> {
    // ...
    it.detach().context("Failed to detach the important thing")?;

    let content = std::fs::read(path)
        .with_context(|| format!("Failed to read instrs from {}", path))?;
    // ...
}
But it looks like I can replace the with_context method with context, get rid of the closure by deleting ||, and the behaviour of the program wouldn't change.
What is the difference between the two methods under the hood?
The closure provided to with_context is evaluated lazily, and the reason you'd use with_context over context is the same reason you'd choose to lazily evaluate anything: it rarely happens and it's expensive to compute. Once those conditions are satisfied, with_context becomes preferable over context. Commented pseudo-example:
fn calculate_expensive_context() -> String {
    // really expensive
    std::thread::sleep(std::time::Duration::from_secs(1));
    "expensive context".to_string()
}

// eagerly evaluated expensive context
// this function ALWAYS takes 1+ seconds to execute
// consistently terrible performance
fn failable_operation_eager_context(some_struct: Struct) -> Result<()> {
    some_struct
        .some_failable_action()
        .context(calculate_expensive_context())
}

// lazily evaluated expensive context
// function returns instantly, only takes 1+ seconds on failure
// great performance for the average case, only terrible performance on error cases
fn failable_operation_lazy_context(some_struct: Struct) -> Result<()> {
    some_struct
        .some_failable_action()
        .with_context(|| calculate_expensive_context())
}
As the documentation for anyhow::Context::with_context states:
Wrap the error value with additional context that is evaluated lazily only once an error does occur.
If what is passed to context might be computationally expensive, it is better to use with_context, as the closure passed to it is only evaluated if an error actually occurs. This is referred to as being evaluated in a lazy and not an eager manner.
Similar behavior exists in the standard library, e.g.:

Eager               Lazy
Option::or          Option::or_else
Option::unwrap_or   Option::unwrap_or_else
Option::map_or      Option::map_or_else
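For instance, the Option pair behaves just like context vs with_context (a small sketch; the function name is only illustrative):

fn expensive_default() -> i32 {
    println!("computing the default...");
    42
}

fn main() {
    let value = Some(1);
    // Eager: expensive_default() runs even though the Option is Some and the result is unused.
    let a = value.unwrap_or(expensive_default());
    // Lazy: the function is only called if the Option is None, so here it never runs.
    let b = value.unwrap_or_else(expensive_default);
    println!("{} {}", a, b);
}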

Why does the querySkuDetails need to run in IO context?

According to https://developer.android.com/google/play/billing/integrate, billingClient.querySkuDetails is called with withContext(Dispatchers.IO):
suspend fun querySkuDetails() {
    val skuList = ArrayList<String>()
    skuList.add("premium_upgrade")
    skuList.add("gas")
    val params = SkuDetailsParams.newBuilder()
    params.setSkusList(skuList).setType(SkuType.INAPP)
    val skuDetailsResult = withContext(Dispatchers.IO) {
        billingClient.querySkuDetails(params.build())
    }
    // Process the result.
}
I am curious what benefit this gives, as querySkuDetails is already a suspending function. So what do I gain here?
I could write the same code with
val skuDetailsResult = coroutineScope {
    billingClient.querySkuDetails(params.build())
}
There is no more context, and I don't know how to download the source code of the billing client.
The underlying method being called is querySkuDetailsAsync which takes a callback and performs the network request asynchronously.
You are correct that withContext(Dispatchers.IO) is not needed there; it actually introduces unnecessary overhead.
Taken from https://stackoverflow.com/a/62182736/6167844:
It seems to be a common misconception that, just because IO is being performed by a suspend function, you must call it in Dispatchers.IO. That is unnecessary (and can be expensive).
Suspending functions, by convention, don't block the calling thread; internally they switch to Dispatchers.IO themselves if need be.
