RxSwift: how to skip a map depending on a previous result?

I am trying to download some JSON, parse it, check some information in it, and depending on the result continue processing or not.
What's the most RxSwift idiomatic way of doing this?
URLSession.shared.rx
    .data(request: request)
    .observe(on: ConcurrentDispatchQueueScheduler(qos: .background))
    .flatMap(parseJson) // into ModelObject
    .flatMap(checkModel) // on some condition, is there any way to jump into the onCompleted block? If the condition is false, then execute processObject
    .map(processObject)
    .subscribe(
        onError: { error in
            print("error: \(error)")
        }, onCompleted: {
            print("Completed with no error")
        })
    .disposed(by: disposeBag)
where parseJson is something like:
func parseJson(_ data: Data) -> Single<ModelObject>
checkModel does some checking and, if some conditions are fulfilled, should complete the sequence without ending up in processObject:
func checkModel(_ modelObject: ModelObject) -> Single<ModelObject> {
    // probably Single is not what I want here
}
And finally, processObject:
func processObject(_ modelObject: ModelObject) -> Completable {
}

This is a bit of a tough question to answer because, on the one hand, you ask a bog-simple question about skipping a map, while on the other you ask for the "most RxSwift idiomatic way of doing this," which would require more changes than simply jumping over the map.
If I just answer the basic question: the solution would be to have checkModel return a Maybe rather than a Single.
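For example, a minimal sketch (assuming the same ModelObject type; the isValid check is a stand-in for whatever your condition is):
import RxSwift

func checkModel(_ modelObject: ModelObject) -> Maybe<ModelObject> {
    // Emit the object when the condition holds; otherwise complete empty,
    // which means no element reaches the downstream operators.
    modelObject.isValid ? Maybe.just(modelObject) : Maybe.empty()
}
A Maybe that completes without emitting produces no element for the downstream map, which is exactly the "jump to onCompleted" behavior you asked about.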
Looking at this code from a "make it more idiomatic" perspective, a few more changes need to take place. A lot of what I'm about to say comes from assumptions based on the names of the functions and expectations as to what you are trying to accomplish. I will try to call out those assumptions as I go along...
The .observe(on: ConcurrentDispatchQueueScheduler(qos: .background)) is likely not necessary; URLSession already emits on a background thread.
The parseJson function probably should not return an Observable type at all. It should just return a ModelObject. This assumes that the function is pure; that it doesn't perform any side effect and merely transforms a Data into a ModelObject.
func parseJson(_ data: Data) throws -> ModelObject
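For instance, if ModelObject conforms to Decodable, a sketch could be:
import Foundation

func parseJson(_ data: Data) throws -> ModelObject {
    // JSONDecoder throws on malformed input; `map` will surface that
    // as an error event in the Rx chain.
    try JSONDecoder().decode(ModelObject.self, from: data)
}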
The checkModel function should probably not return an Observable type either. This really sounds like it should return a Bool and be used to filter out the model objects that don't need further processing. Here I'm assuming again that the function is pure: it doesn't perform any side effect, it just checks the model.
func checkModel(_ modelObject: ModelObject) -> Bool
Lastly, the processObject function presumably has side effects. It's likely a consumer of data and therefore shouldn't return anything at all (i.e., it should return Void.)
func processObject(_ modelObject: ModelObject)
Update: In your comments you say you want to end with a Completable. Even so, I would not want this function to return a Completable, because that would make it lazy and thus require you to subscribe even when you just want to call it for its effects.
You can create a generic wrap operator to make any side-effecting function into a Completable:
extension Completable {
    static func wrap<T>(_ fn: @escaping (T) -> Void) -> (T) -> Completable {
        { element in
            fn(element)
            return Completable.empty()
        }
    }
}
If the above functions are adjusted as discussed above, then the Observable chain becomes:
let getAndProcess = URLSession.shared.rx.data(request: request)
    .map(parseJson)
    .filter(checkModel)
    .flatMap(Completable.wrap(processObject))
    .asCompletable()
The above will produce a Completable that will execute the flow every time it's subscribed to.
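For example, kicking it off might look like this (a sketch, reusing the disposeBag from the question):
getAndProcess
    .subscribe(
        onCompleted: { print("Completed with no error") },
        onError: { error in print("error: \(error)") }
    )
    .disposed(by: disposeBag)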
By setting things up this way, you will find that your base functions are far easier to test. You don't need any special infrastructure, not even RxTest, to make sure they are correct. Also, it is clear this way that parseJson and checkModel aren't performing any side effects.
The idea is to have a "Functional Core, Imperative Shell". The imperative bits (in this case the data request and the processing) are moved out to the edges while the core of the subscription is kept purely functional and easy to test/understand.
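For instance, a plain XCTest over the pure functions might look like this (a sketch: the JSON payload and the id/isValid members of ModelObject are assumptions, not your actual model):
import XCTest

final class ModelTests: XCTestCase {
    func testParseJson() throws {
        // No RxTest schedulers needed: parseJson is a plain throwing function.
        let data = Data(#"{"id": 1}"#.utf8) // hypothetical payload
        let model = try parseJson(data)
        XCTAssertEqual(model.id, 1)
    }

    func testCheckModel() {
        // checkModel is a pure predicate, so it can be asserted directly.
        XCTAssertTrue(checkModel(ModelObject(id: 1)))
    }
}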

Related

How do I use multiple reactive streams in the same pipeline?

I'm using WebFlux to pull data from two different REST endpoints, and trying to correlate some data from one stream with the other. I have Flux instances called events and egvs and for each event, I want to find the EGV with the nearest timestamp.
final Flux<Tuple2<Double, Object>> data = events
    .map(e -> Tuples.of(e.getValue(),
        egvs.map(egv -> Tuples.of(egv.getValue(),
                Math.abs(Duration.between(e.getDisplayTime(),
                    egv.getDisplayTime()).toSeconds())))
            .sort(Comparator.comparingLong(Tuple2::getT2))
            .take(1)
            .map(v -> v.getT1())));
When I send data to my Thymeleaf template, the first element of the tuple renders as a number, as I'd expect, but the second element renders as a FluxMapFuseable. It appears that the egvs.map(...) portion of the pipeline isn't executing. How do I get that part of the pipeline to execute?
UPDATE
Thanks, @Toerktumlare - your answer helped me figure out that my approach was wrong. On each iteration through the map operation, the event needs the context of the entire set of EGVs to find the one it matches with. So the working code looks like this:
final Flux<Tuple2<Double, Double>> data =
    Flux.zip(events, egvs.collectList().repeat())
        .map(t -> Tuples.of(
            // Grab the event
            t.getT1().getValue(),
            // Find the EGV (from the full set of EGVs) with the closest timestamp
            t.getT2().stream()
                .map(egv -> Tuples.of(
                    egv.getValue(),
                    Math.abs(Duration.between(
                        t.getT1().getDisplayTime(),
                        egv.getDisplayTime()).toSeconds())))
                // Sort the stream of (value, time difference) tuples and
                // take the smallest time difference.
                .sorted(Comparator.comparingLong(Tuple2::getT2))
                .map(Tuple2::getT1)
                .findFirst()
                .orElse(0.)));
What I think is happening is that you are breaking the reactive chain.
During the assembly phase, Reactor will call each operator backwards until it finds a producer that can start producing items, and I think you are breaking that chain here:
egvs.map(egv -> Tuples.of( ..., ... )
You see, egvs.map returns something that you need to take care of and chain onto the return of events.map.
I'll give you an example:
// This works because we always return from flatMap,
// keeping the chain intact.
Mono.just("foobar")
    .flatMap(f -> {
        return Mono.just(f);
    })
    .subscribe(s -> {
        System.out.println(s);
    });
On the other hand, this behaves differently:
Mono.just("foobar").flatMap(f -> {
    Mono.just("foo").doOnSuccess(s -> { System.out.println("this will never print"); });
    return Mono.just(f);
});
That's because in this example we never take care of the return value of the inner Mono, thus breaking the chain.
You haven't really disclosed what egvs actually is, so I won't be able to give you a full answer, but you should most likely do something like this:
final Flux<Tuple2<Double, Object>> data = events
    // chain on egvs here instead
    // and then return your full tuple object
    .map(e -> egvs.map(egv -> Tuples.of(e.getValue(),
            Tuples.of(egv.getValue(),
                Math.abs(Duration.between(e.getDisplayTime(),
                    egv.getDisplayTime()).toSeconds()))))
        .sort(Comparator.comparingLong(Tuple2::getT2))
        .take(1)
        .map(v -> v.getT1()));
I don't have a compiler to check against at the moment, but I believe that is your problem, at least; it's a bit tricky to read your code.

How to turn an RxSwift Observable into a value?

I'm currently using RIBs and ReactorKit to bind networking data.
The problem here is that the network results come out as Observables, which I have a hard time binding to ReactorKit.
Please let me know if there is a way to strip the Observable or turn it into a value.
Just like how the value comes straight out of a BehaviorRelay's .value...
dependency.loadData.getData().flatMap { $0.detailData.flatMap { $0.result }}
====>> Observable
Now what do I do? TT
Please let me know if there is a way to strip the Observable or turn it into a value.
This is called "leaving" or "breaking" the monad and is a code smell.
In production code, it is rarely advised to 'break the monad', especially moving from an observable sequence to blocking methods. Switching between asynchronous and synchronous paradigms should be done with caution, as this is a common root cause for concurrency problems such as deadlock and scalability issues.
-- Intro to Rx
If you absolutely have to do it, then here is a way:
class MyClass {
    private(set) var value: Int = 0
    private let disposeBag = DisposeBag()

    init(observable: Observable<Int>) {
        observable
            .subscribe(onNext: { [weak self] new in
                self?.value = new
            })
            .disposed(by: disposeBag)
    }
}
With the above, when you query value it will have the last value emitted from the observable. You risk race conditions doing this and that's up to you to deal with.
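If you do go this route, one way to at least serialize access is a lock (a minimal sketch; NSLock is just one option among Foundation's synchronization tools):
import Foundation
import RxSwift

class MyClass {
    private let lock = NSLock()
    private var _value: Int = 0
    private let disposeBag = DisposeBag()

    var value: Int {
        // Reads and writes go through the same lock.
        lock.lock(); defer { lock.unlock() }
        return _value
    }

    init(observable: Observable<Int>) {
        observable
            .subscribe(onNext: { [weak self] new in
                guard let self = self else { return }
                self.lock.lock(); defer { self.lock.unlock() }
                self._value = new
            })
            .disposed(by: disposeBag)
    }
}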
That's the direct answer to your question but it isn't the whole story. In ReactorKit, the API call should be made in your reactor's mutate() function. That function returns an Observable<Mutation> so instead of breaking the monad, you should be just mapping the API response into a Mutation which is likely a specific enum case that is then passed into your reduce() function.
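A rough sketch of that shape (the Action and Mutation cases and the loadData call here are placeholders based on your snippet, not your actual types):
func mutate(action: Action) -> Observable<Mutation> {
    switch action {
    case .load:
        // Keep the value inside the Observable: map the network response
        // into a Mutation instead of trying to extract it synchronously.
        return dependency.loadData.getData()
            .flatMap { $0.detailData.flatMap { $0.result } }
            .map(Mutation.setDetail)
    }
}
Your reduce() function then applies the setDetail mutation to produce the next State, and the view binds to state changes rather than to a raw value.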

What are the performance implications of .await on a Ready future? [duplicate]

In a language like C#, given this code (I am not using the await keyword on purpose):
async Task Foo()
{
    var task = LongRunningOperationAsync();
    // Some other non-related operation
    AnotherOperation();
    result = task.Result;
}
In the first line, the long operation is run in another thread, and a Task is returned (that is, a future). You can then do another operation that will run in parallel with the first one, and at the end you can wait for the operation to be finished. I think that this is also the behavior of async/await in Python, JavaScript, etc.
On the other hand, in Rust, I read in the RFC that:
A fundamental difference between Rust's futures and those from other languages is that Rust's futures do not do anything unless polled. The whole system is built around this: for example, cancellation is dropping the future for precisely this reason. In contrast, in other languages, calling an async fn spins up a future that starts executing immediately.
In this situation, what is the purpose of async/await in Rust? Seeing other languages, this notation is a convenient way to run parallel operations, but I cannot see how it works in Rust if the calling of an async function does not run anything.
You are conflating a few concepts.
Concurrency is not parallelism, and async and await are tools for concurrency, which may sometimes mean they are also tools for parallelism.
Additionally, whether a future is immediately polled or not is orthogonal to the syntax chosen.
async / await
The keywords async and await exist to make creating and interacting with asynchronous code easier to read and look more like "normal" synchronous code. This is true in all of the languages that have such keywords, as far as I am aware.
Simpler code
This is code that creates a future that adds two numbers when polled
before
fn long_running_operation(a: u8, b: u8) -> impl Future<Output = u8> {
    struct Value(u8, u8);

    impl Future for Value {
        type Output = u8;

        fn poll(self: Pin<&mut Self>, _ctx: &mut Context) -> Poll<Self::Output> {
            Poll::Ready(self.0 + self.1)
        }
    }

    Value(a, b)
}
after
async fn long_running_operation(a: u8, b: u8) -> u8 {
    a + b
}
Note that the "before" code is basically the implementation of today's poll_fn function
See also Peter Hall's answer about how keeping track of many variables can be made nicer.
References
One of the potentially surprising things about async/await is that it enables a specific pattern that wasn't possible before: using references in futures. Here's some code that fills up a buffer with a value in an asynchronous manner:
before
use std::io;

fn fill_up<'a>(buf: &'a mut [u8]) -> impl Future<Output = io::Result<usize>> + 'a {
    futures::future::lazy(move |_| {
        for b in buf.iter_mut() { *b = 42 }
        Ok(buf.len())
    })
}

fn foo() -> impl Future<Output = Vec<u8>> {
    let mut data = vec![0; 8];
    fill_up(&mut data).map(|_| data)
}
This fails to compile:
error[E0597]: `data` does not live long enough
--> src/main.rs:33:17
|
33 | fill_up_old(&mut data).map(|_| data)
| ^^^^^^^^^ borrowed value does not live long enough
34 | }
| - `data` dropped here while still borrowed
|
= note: borrowed value must be valid for the static lifetime...
error[E0505]: cannot move out of `data` because it is borrowed
--> src/main.rs:33:32
|
33 | fill_up_old(&mut data).map(|_| data)
| --------- ^^^ ---- move occurs due to use in closure
| | |
| | move out of `data` occurs here
| borrow of `data` occurs here
|
= note: borrowed value must be valid for the static lifetime...
after
use std::io;

async fn fill_up(buf: &mut [u8]) -> io::Result<usize> {
    for b in buf.iter_mut() { *b = 42 }
    Ok(buf.len())
}

async fn foo() -> Vec<u8> {
    let mut data = vec![0; 8];
    fill_up(&mut data).await.expect("IO failed");
    data
}
This works!
Calling an async function does not run anything
The implementation and design of a Future and the entire system around futures, on the other hand, are unrelated to the keywords async and await. Indeed, Rust had a thriving asynchronous ecosystem (such as with Tokio) before the async / await keywords ever existed. The same was true for JavaScript.
Why aren't Futures polled immediately on creation?
For the most authoritative answer, check out this comment from withoutboats on the RFC pull request:
A fundamental difference between Rust's futures and those from other
languages is that Rust's futures do not do anything unless polled. The
whole system is built around this: for example, cancellation is
dropping the future for precisely this reason. In contrast, in other
languages, calling an async fn spins up a future that starts executing
immediately.
A point about this is that async & await in Rust are not inherently
concurrent constructions. If you have a program that only uses async &
await and no concurrency primitives, the code in your program will
execute in a defined, statically known, linear order. Obviously, most
programs will use some kind of concurrency to schedule multiple,
concurrent tasks on the event loop, but they don't have to. What this
means is that you can - trivially - locally guarantee the ordering of
certain events, even if there is nonblocking IO performed in between
them that you want to be asynchronous with some larger set of nonlocal
events (e.g. you can strictly control ordering of events inside of a
request handler, while being concurrent with many other request
handlers, even on two sides of an await point).
This property gives Rust's async/await syntax the kind of local
reasoning & low-level control that makes Rust what it is. Running up
to the first await point would not inherently violate that - you'd
still know when the code executed, it would just execute in two
different places depending on whether it came before or after an
await. However, I think the decision made by other languages to start
executing immediately largely stems from their systems which
immediately schedule a task concurrently when you call an async fn
(for example, that's the impression of the underlying problem I got
from the Dart 2.0 document).
Some of the Dart 2.0 background is covered by this discussion from munificent:
Hi, I'm on the Dart team. Dart's async/await was designed mainly by
Erik Meijer, who also worked on async/await for C#. In C#, async/await
is synchronous to the first await. For Dart, Erik and others felt that
C#'s model was too confusing and instead specified that an async
function always yields once before executing any code.
At the time, I and another on my team were tasked with being the
guinea pigs to try out the new in-progress syntax and semantics in our
package manager. Based on that experience, we felt async functions
should run synchronously to the first await. Our arguments were
mostly:
Always yielding once incurs a performance penalty for no good reason. In most cases, this doesn't matter, but in some it really
does. Even in cases where you can live with it, it's a drag to bleed a
little perf everywhere.
Always yielding means certain patterns cannot be implemented using async/await. In particular, it's really common to have code like
(pseudo-code here):
getThingFromNetwork():
if (downloadAlreadyInProgress):
return cachedFuture
cachedFuture = startDownload()
return cachedFuture
In other words, you have an async operation that you can call multiple times before it completes. Later calls use the same
previously-created pending future. You want to ensure you don't start
the operation multiple times. That means you need to synchronously
check the cache before starting the operation.
If async functions are async from the start, the above function can't use async/await.
We pleaded our case, but ultimately the language designers stuck with
async-from-the-top. This was several years ago.
That turned out to be the wrong call. The performance cost is real
enough that many users developed a mindset that "async functions are
slow" and started avoiding using it even in cases where the perf hit
was affordable. Worse, we see nasty concurrency bugs where people
think they can do some synchronous work at the top of a function and
are dismayed to discover they've created race conditions. Overall, it
seems users do not naturally assume an async function yields before
executing any code.
So, for Dart 2, we are now taking the very painful breaking change to
change async functions to be synchronous to the first await and
migrating all of our existing code through that transition. I'm glad
we're making the change, but I really wish we'd done the right thing
on day one.
I don't know if Rust's ownership and performance model place different
constraints on you where being async from the top really is better,
but from our experience, sync-to-the-first-await is clearly the better
trade-off for Dart.
cramert replies (note that some of this syntax is outdated now):
If you need code to execute immediately when a function is called
rather than later on when the future is polled, you can write your
function like this:
fn foo() -> impl Future<Item = Thing> {
    println!("prints immediately");
    async_block! {
        println!("prints when the future is first polled");
        await!(bar());
        await!(baz())
    }
}
Code examples
These examples use the async support in Rust 1.39 and the futures crate 0.3.1.
Literal transcription of the C# code
use futures; // 0.3.1

async fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");
    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");
    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = long_running_operation(1, 2);

    another_operation(3, 4);

    sum.await
}

fn main() {
    let task = foo();

    futures::executor::block_on(async {
        let v = task.await;
        println!("Result: {}", v);
    });
}
If you called foo, the sequence of events in Rust would be:

1. Something implementing Future<Output = u8> is returned.

That's it. No "actual" work is done yet. If you take the result of foo and drive it towards completion (by polling it, in this case via futures::executor::block_on), then the next steps are:

1. Something implementing Future<Output = u8> is returned from calling long_running_operation (it does not start work yet).
2. another_operation does work, as it is synchronous.
3. The .await syntax causes the code in long_running_operation to start. The foo future will continue to return "not ready" until the computation is done.
The output would be:
foo
another_operation
long_running_operation
Result: 3
Note that there are no thread pools here: this is all done on a single thread.
async blocks
You can also use async blocks:
use futures::{future, FutureExt}; // 0.3.1

fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");
    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");
    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = async { long_running_operation(1, 2) };
    let oth = async { another_operation(3, 4) };

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}
Here we wrap synchronous code in an async block and then wait for both actions to complete before this function will be complete.
Note that wrapping synchronous code like this is not a good idea for anything that will actually take a long time; see What is the best approach to encapsulate blocking I/O in future-rs? for more info.
With a threadpool
// Requires the `thread-pool` feature to be enabled
use futures::{executor::ThreadPool, future, task::SpawnExt, FutureExt};

async fn foo(pool: &mut ThreadPool) -> u8 {
    println!("foo");

    let sum = pool
        .spawn_with_handle(async { long_running_operation(1, 2) })
        .unwrap();
    let oth = pool
        .spawn_with_handle(async { another_operation(3, 4) })
        .unwrap();

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}
The purpose of async/await in Rust is to provide a toolkit for concurrency—same as in C# and other languages.
In C# and JavaScript, async methods start running immediately, and they're scheduled whether you await the result or not. In Python and Rust, when you call an async method, nothing happens (it isn't even scheduled) until you await it. But it's largely the same programming style either way.
The ability to spawn another task (that runs concurrent with and independent of the current task) is provided by libraries: see async_std::task::spawn and tokio::task::spawn.
As for why Rust async is not exactly like C#, well, consider the differences between the two languages:
Rust discourages global mutable state. In C# and JS, every async method call is implicitly added to a global mutable queue. It's a side effect to some implicit context. For better or worse, that's not Rust's style.
Rust is not a framework. It makes sense that C# provides a default event loop. It also provides a great garbage collector! Lots of things that come standard in other languages are optional libraries in Rust.
Consider this simple pseudo-JavaScript code that fetches some data, processes it, fetches some more data based on the previous step, summarises it, and then prints a result:
getData(url)
    .then(response -> parseObjects(response.data))
    .then(data -> findAll(data, 'foo'))
    .then(foos -> getWikipediaPagesFor(foos))
    .then(sumPages)
    .then(sum -> console.log("sum is: ", sum));
In async/await form, that's:
async {
    let response = await getData(url);
    let objects = parseObjects(response.data);
    let foos = findAll(objects, 'foo');
    let pages = await getWikipediaPagesFor(foos);
    let sum = sumPages(pages);
    console.log("sum is: ", sum);
}
It introduces a lot of single-use variables and is arguably worse than the original version with promises. So why bother?
Consider this change, where the variables response and objects are needed later on in the computation:
async {
    let response = await getData(url);
    let objects = parseObjects(response.data);
    let foos = findAll(objects, 'foo');
    let pages = await getWikipediaPagesFor(foos);
    let sum = sumPages(pages, objects.length);
    console.log("sum is: ", sum, " and status was: ", response.status);
}
And try to rewrite it in the original form with promises:
getData(url)
    .then(response -> Promise.resolve(parseObjects(response.data))
        .then(objects -> Promise.resolve(findAll(objects, 'foo'))
            .then(foos -> getWikipediaPagesFor(foos))
            .then(pages -> sumPages(pages, objects.length)))
        .then(sum -> console.log("sum is: ", sum, " and status was: ", response.status)));
Each time you need to refer back to a previous result, you need to nest the entire structure one level deeper. This can quickly become very difficult to read and maintain, but the async/await version does not suffer from this problem.

Pattern for Observables that includes acknowledgement

I'm working on something that is recording data coming from a queue. It was easy enough to process the queue into an Observable so that I can have multiple endpoints in my code receiving the information in the queue.
Furthermore, I can be sure that the information arrives in order. That bit works nicely as well since the Observables ensure that. But, one tricky bit is that I don't want the Observer to be notified of the next thing until it has completed processing the previous thing. But the processing done by the Observer is asynchronous.
As a more concrete example that is probably simple enough to follow: imagine my queue contains URLs. I'm exposing those as an Observable in my code. Then I subscribe an Observer whose job is to fetch the URLs and write the content to disk (this is a contrived example, so don't take issue with these specifics). The important point is that fetching and saving are async. My problem is that I don't want the observer to be given the "next" URL from the Observable until it has completed the previous processing.
But the call to next on the Observer interface returns void. So there is no way for the Observer to communicate back to me that it has actually completed the async task.
Any suggestions? I suspect there is probably some kind of operator that could be coded up that would basically withhold future values (queue them up in memory?) until it somehow knew the Observer was ready for it. But I was hoping something like that already existed following some established pattern.
I ran into a similar use case before:
window.document.onkeydown = (e) => {
    return false
}

let count = 0;
let asyncTask = (name, time) => {
    time = time || 2000
    return Rx.Observable.create(function(obs) {
        setTimeout(function() {
            count++
            obs.next('task:' + name + count);
            console.log('Task:', count, ' ', time, 'task complete')
            obs.complete();
        }, time);
    });
}

let subject = new Rx.Subject()
let queueExec$ = new Rx.Subject()

Rx.Observable.fromEvent(btnA, 'click').subscribe(() => {
    queueExec$.next(asyncTask('A', 4000))
})
Rx.Observable.fromEvent(btnB, 'click').subscribe(() => {
    queueExec$.next(asyncTask('B', 4000))
})
Rx.Observable.fromEvent(btnC, 'click').subscribe(() => {
    queueExec$.next(asyncTask('C', 4000))
})

queueExec$.concatMap(value => value)
    .subscribe(function(data) {
        console.log('onNext', data);
    },
    function(error) {
        console.log('onError', error);
    }, function() {
        console.log('completed')
    });
What you describe sounds like "backpressure". You can read about it in the RxJS 4 documentation: https://github.com/Reactive-Extensions/RxJS/blob/master/doc/gettingstarted/backpressure.md. However, that page mentions operators that don't exist in RxJS 5. For example, have a look at "Controlled Observables", which should refer to what you need.
I think you could achieve the same with concatMap and an instance of Subject:
const asyncOperationEnd = new Subject();

source.concatMap(val => asyncOperationEnd
        .mapTo(void 0)
        .startWith(val)
        .take(2) // that's `val` and the `void 0` that ends this inner Observable
    )
    .filter(Boolean) // Always ignore `void 0`
    .subscribe(val => {
        // do some async operation...
        // call `asyncOperationEnd.next()` and let `concatMap` process another value
    });
From your description it actually seems like the "observer" you're mentioning works like a Subject, so it might make more sense to make a custom Subject class that you could use in any Observable chain.
Isn't this just concatMap?
// Requests are coming in a stream, with small intervals or without any.
const requests = Rx.Observable.of(2, 1, 16, 8, 16)
    .concatMap(v => Rx.Observable.timer(1000).mapTo(v));

// Fetch, it takes some time.
function fetch(query) {
    return Rx.Observable.timer(100 * query)
        .mapTo('!' + query).startWith('?' + query);
}

requests.concatMap(q => fetch(q));
https://rxviz.com/v/Mog1rmGJ
If you want to allow multiple fetches simultaneously, use mergeMap with concurrency parameter.

Q Promises - Create a dynamic promise chain then trigger it

I am wondering if there's a way to create a promise chain that I can build based on a series of if statements and somehow trigger it at the end. For example:
// Get response from some call
callback = function(response) {
    var chain = Q(response.userData)
    if (!response.connected) {
        chain = chain.then(connectUser)
    }
    if (!response.exists) {
        chain = chain.then(addUser)
    }
    // etc...
    // Finally somehow trigger the chain
    chain.trigger().then(successCallback, failCallback)
}
A promise represents an operation that has already started. You can't trigger() a promise chain, since the promise chain is already running.
While you can get around this by creating a deferred and then queuing around it and eventually resolving it later - this is not optimal. If you drop the .trigger from the last line though, I suspect your task will work as expected - the only difference is that it will queue the operations and start them rather than wait:
var q = Q();
if (false) {
    q = q.then(function(el){ return Q.delay(1000, "Hello"); });
} else {
    q = q.then(function(el){ return Q.delay(1000, "Hi"); });
}
q.then(function(res){
    console.log(res); // logs "Hi"
});
The key points here are:
A promise represents an already started operation.
You can append .then handlers to a promise even after it resolved and it will still execute predictably.
Good luck, and happy coding
As Benjamin says ...
... but you might also like to consider something slightly different. Try turning the code inside-out; build the then chain unconditionally and perform the tests inside the .then() callbacks.
function foo(response) {
    return Q().then(function() {
        return (response.connected) ? null : connectUser(response.userData);
    }).then(function() {
        return (response.exists) ? null : addUser(response.userData); // assuming addUser() accepts response.userData
    });
}
I think you will get away with returning nulls - if null doesn't work, then try Q() (in two places).
If my assumption about what is passed to addUser() is correct, then you don't need to worry about passing data down the chain - response remains available in the closure formed by the outer function. If this assumption is incorrect, then no worries - simply arrange for connectUser to return whatever is necessary and pick it up in the second .then.
I would regard this approach as more elegant than conditional chain building, even though it is less efficient. That said, you are unlikely ever to notice the difference.
