I am wondering if there's a way to create a promise chain that I can build based on a series of if statements and somehow trigger it at the end. For example:
// Get response from some call
callback = function (response) {
var chain = Q(response.userData)
if (!response.connected) {
chain = chain.then(connectUser)
}
if (!response.exists) {
chain = chain.then(addUser)
}
// etc...
// Finally somehow trigger the chain
chain.trigger().then(successCallback, failCallback)
}
A promise represents an operation that has already started. You can't trigger() a promise chain, since the promise chain is already running.
While you can work around this by creating a deferred, queuing around it, and eventually resolving it later, that is not optimal. If you drop the .trigger from the last line, though, I suspect your code will work as expected; the only difference is that it will queue the operations and start them immediately rather than wait:
var q = Q();
if (false) {
    q = q.then(function (el) { return Q.delay(1000, "Hello"); });
} else {
    q = q.then(function (el) { return Q.delay(1000, "Hi"); });
}
q.then(function (res) {
    console.log(res); // logs "Hi"
});
The key points here are:
A promise represents an already started operation.
You can append .then handlers to a promise even after it has resolved, and they will still execute predictably (see the sketch below).
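For instance, a minimal sketch of that second point, assuming the Q library is available:
var Q = require("q");

var p = Q(42); // a promise that is already resolved

setTimeout(function () {
    // attaching a handler a second later still works predictably
    p.then(function (value) {
        console.log(value); // still logs 42, one second later
    });
}, 1000);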
Good luck, and happy coding
As Benjamin says ...
... but you might also like to consider something slightly different. Try turning the code inside-out; build the then chain unconditionally and perform the tests inside the .then() callbacks.
function foo(response) {
return Q().then(function() {
return (response.connected) ? null : connectUser(response.userData);
}).then(function() {
return (response.exists) ? null : addUser(response.userData);//assuming addUser() accepts response.userData
});
}
I think you will get away with returning nulls; if null doesn't work, then try Q() (in both places).
If my assumption about what is passed to addUser() is correct, then you don't need to worry about passing data down the chain: response remains available in the closure formed by the outer function. If this assumption is incorrect, then no worries; simply arrange for connectUser to return whatever is necessary and pick it up in the second .then.
I would regard this approach as more elegant than conditional chain building, even though it is slightly less efficient. That said, you are unlikely ever to notice the difference.
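A usage sketch tying it back to the question (successCallback and failCallback are the names from the original question):
foo(response).then(successCallback, failCallback);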
I have tried to unsubscribe within the subscribe method. It seems to work, but I haven't found an example on the internet showing that you can do it this way.
I know that there are many other possibilities to unsubscribe, or to limit the subscription with pipeable operators. Please do not suggest another solution; instead, explain why you shouldn't do this, or whether it is a valid approach.
example:
let localSubscription = someObservable.subscribe(result => {
this.result = result;
if (localSubscription && someStatement) {
localSubscription.unsubscribe();
}
});
The problem
Sometimes the pattern you used above will work and sometimes it won't. Here are two examples; you can try to run them yourself. One will throw an error and the other will not.
const subscription = of(1,2,3,4,5).pipe(
tap(console.log)
).subscribe(v => {
if(v === 4) subscription.unsubscribe();
});
The output:
1
2
3
4
Error: Cannot access 'subscription' before initialization
Something similar:
const subscription = of(1,2,3,4,5).pipe(
  delay(0),
  tap(console.log)
).subscribe(v => {
  if (v === 4) subscription.unsubscribe();
});
The output:
1
2
3
4
This time you don't get an error, but you also unsubscribed before the 5 from the source observable of(1,2,3,4,5) was delivered.
Hidden Constraints
If you're familiar with Schedulers in RxJS, you might immediately be able to spot the extra hidden information that allows one example to work while the other doesn't.
delay (even a delay of 0 milliseconds) returns an Observable that uses an asynchronous scheduler. This means, in effect, that the current block of code will finish executing before the delayed observable has a chance to emit.
This guarantees that, in a single-threaded environment (like the JavaScript runtime currently found in browsers), your subscription has been initialized by the time the callback runs.
The Solutions
1. Keep a fragile codebase
One possible solution is to just ignore common wisdom and continue to use this pattern for unsubscribing. To do so, you, and anyone on your team who might use your code for reference or might someday need to maintain it, must take on the extra cognitive load of remembering which observables use which scheduler.
Changing how an observable transforms data in one part of your application may cause unexpected errors in every part of the application that relies on this data being supplied by an asynchronous scheduler.
For example: code that runs fine when querying a server may break when a cached result is returned synchronously. What seems like an optimization now wreaks havoc in your codebase. When this sort of error appears, the source can be rather difficult to track down.
Finally, if browsers (or Node.js, if that's where your code runs) ever start to support multi-threaded environments, your code will either have to forgo that enhancement or be re-written.
2. Making "unsubscribe inside subscription callback" a safe pattern
Idiomatic RxJS code tries to be scheduler-agnostic wherever possible.
Here is how you might use the pattern above without worrying about which scheduler an observable uses. This is effectively scheduler-agnostic, though it likely complicates a rather simple task more than it needs to be.
const stream = publish()(of(1,2,3,4,5));
const subscription = stream.pipe(
tap(console.log)
).subscribe(x => {
if(x === 4) subscription.unsubscribe();
});
stream.connect();
This lets you use an "unsubscribe inside a subscription" pattern safely. It will always work regardless of the scheduler and would continue to work if (for example) you moved your code to a multi-threaded environment (the delay example above may break, but this will not).
3. RxJS Operators
The best solutions will be those that use operators that handle subscription/unsubscription on your behalf. They require no extra cognitive load in the best circumstances and manage to contain/manage errors relatively well (less spooky action at a distance) in the more exotic circumstances.
Most higher-order operators do this (concat, merge, concatMap, switchMap, mergeMap, etc.). Other operators like take, takeUntil, and takeWhile let you use a more declarative style to manage subscriptions.
Where possible, these are preferable as they're all less likely to cause strange errors or confusion within a team that is using them.
The examples above re-written:
of(1,2,3,4,5).pipe(
  tap(console.log),
  first(v => v === 4)
).subscribe();
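An alternative sketch using takeWhile, assuming the same of/tap imports as above plus takeWhile, and RxJS 6.4+ where takeWhile accepts an inclusive flag:
of(1,2,3,4,5).pipe(
  tap(console.log),
  takeWhile(v => v !== 4, true) // keep taking while v !== 4, and include the value that stops it
).subscribe();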
This is a working method, but for Angular the generally recommended approach is to use the async pipe; that's the cleanest solution. In your example you assign the result to an object property, which is not good practice.
If you use the value in the template, just use the async pipe. If you don't, keep it as an observable, like this:
private readonly result$ = someObservable.pipe(/* ...get exactly what you need here... */);
Then you can use result$ wherever you need it: in another observable or in the template.
You can also use pipe(take(1)) or pipe(first()) for unsubscribing. There are other operators as well that let you unsubscribe without writing additional code.
There are various ways of unsubscribing:
Method 1: Unsubscribe right after subscribing (not preferred)
let localSubscription = someObservable.subscribe(result => {
  this.result = result;
});
localSubscription.unsubscribe();
---------------------
Method 2: If you only want the first value (or the first few), use the take or first operator
a) let localSubscription =
someObservable.pipe(take(1)).subscribe(result => {
this.result = result;
});
b) let localSubscription =
someObservable.pipe(first()).subscribe(result => {
this.result = result;
});
---------------------
Method 3: Store the Subscription and unsubscribe in your ngOnDestroy()
localSubscription = someObservable.subscribe(result => {
  this.result = result;
});

ngOnDestroy() { this.localSubscription.unsubscribe(); }
----------------------
Method 4: Use a Subject with the takeUntil operator and complete it in ngOnDestroy()
destroySubject: Subject<any> = new Subject();

localSubscription = someObservable.pipe(takeUntil(this.destroySubject)).subscribe(result => {
  this.result = result;
});

ngOnDestroy() {
  this.destroySubject.next();
  this.destroySubject.complete();
}
I would personally prefer method 4, because you can use the same destroy subject for multiple subscriptions if you have several on a single page.
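A minimal sketch of that idea; the component class and the userService.user$ / settingsService.settings$ observables are assumptions made purely for illustration:
import { Subject } from 'rxjs';
import { takeUntil } from 'rxjs/operators';

class ExampleComponent {
  private destroySubject = new Subject<void>();
  user;
  settings;

  constructor(userService, settingsService) {
    // Both subscriptions are torn down by the same subject.
    userService.user$
      .pipe(takeUntil(this.destroySubject))
      .subscribe(user => (this.user = user));

    settingsService.settings$
      .pipe(takeUntil(this.destroySubject))
      .subscribe(settings => (this.settings = settings));
  }

  ngOnDestroy() {
    this.destroySubject.next();
    this.destroySubject.complete();
  }
}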
I'm working on something that is recording data coming from a queue. It was easy enough to process the queue into an Observable so that I can have multiple endpoints in my code receiving the information in the queue.
Furthermore, I can be sure that the information arrives in order. That bit works nicely as well since the Observables ensure that. But, one tricky bit is that I don't want the Observer to be notified of the next thing until it has completed processing the previous thing. But the processing done by the Observer is asynchronous.
As a more concrete example that is probably simple enough to follow: imagine my queue contains URLs. I'm exposing those as an Observable in my code. Then I subscribe an Observer whose job is to fetch the URLs and write the content to disk (this is a contrived example, so don't take issue with the specifics). The important point is that fetching and saving are async. My problem is that I don't want the observer to be given the "next" URL from the Observable until it has completed the previous processing.
But the call to next on the Observer interface returns void. So there is no way for the Observer to communicate back to me that it has actually completed the async task.
Any suggestions? I suspect there is probably some kind of operator that could be coded up that would basically withhold future values (queue them up in memory?) until it somehow knew the Observer was ready for it. But I was hoping something like that already existed following some established pattern.
I ran into a similar use case before:
// btnA, btnB and btnC are assumed to be button elements on the demo page.
let count = 0;

// Each task is an Observable that completes after `time` ms.
let asyncTask = (name, time) => {
  time = time || 2000;
  return Rx.Observable.create(function (obs) {
    setTimeout(function () {
      count++;
      obs.next('task:' + name + count);
      console.log('Task:', count, ' ', time, 'task complete');
      obs.complete();
    }, time);
  });
};

let queueExec$ = new Rx.Subject();

Rx.Observable.fromEvent(btnA, 'click').subscribe(() => {
  queueExec$.next(asyncTask('A', 4000));
});
Rx.Observable.fromEvent(btnB, 'click').subscribe(() => {
  queueExec$.next(asyncTask('B', 4000));
});
Rx.Observable.fromEvent(btnC, 'click').subscribe(() => {
  queueExec$.next(asyncTask('C', 4000));
});

// concatMap subscribes to each queued task only after the previous one completes.
queueExec$.concatMap(value => value)
  .subscribe(function (data) {
    console.log('onNext', data);
  },
  function (error) {
    console.log('onError', error);
  }, function () {
    console.log('completed');
  });
What you describe sounds like "backpressure". You can read about it in the RxJS 4 documentation: https://github.com/Reactive-Extensions/RxJS/blob/master/doc/gettingstarted/backpressure.md. However, it describes operators that don't exist in RxJS 5; for example, have a look at "Controlled Observables", which sound like what you need.
I think you could achieve the same with concatMap and an instance of Subject:
const asyncOperationEnd = new Subject();
source.concatMap(val => asyncOperationEnd
.mapTo(void 0)
.startWith(val)
.take(2) // that's `val` and the `void 0` that ends this inner Observable
)
.filter(Boolean) // Always ignore `void 0`
.subscribe(val => {
// do some async operation...
// call `asyncOperationEnd.next()` and let `concatMap` process another value
});
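For concreteness, the filled-in version might look like the sketch below; source stands for the URL stream from the question and saveToDisk is a hypothetical helper that returns a Promise:
const asyncOperationEnd = new Subject();

source.concatMap(val => asyncOperationEnd
    .mapTo(void 0)
    .startWith(val)
    .take(2)
  )
  .filter(Boolean)
  .subscribe(val => {
    // kick off the async work for this value...
    saveToDisk(val).then(() => {
      // ...and only then let concatMap move on to the next queued value
      asyncOperationEnd.next();
    });
  });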
From your description it actually seems like the "observer" you're mentioning works like a Subject, so it might make more sense to make a custom Subject class that you could use in any Observable chain.
Isn't this just concatMap?
// Requests are coming in a stream, with small intervals or without any.
const requests=Rx.Observable.of(2,1,16,8,16)
.concatMap(v=>Rx.Observable.timer(1000).mapTo(v));
// Fetch, it takes some time.
function fetch(query){
return Rx.Observable.timer(100*query)
.mapTo('!'+query).startWith('?'+query);
}
requests.concatMap(q=>fetch(q));
https://rxviz.com/v/Mog1rmGJ
If you want to allow multiple fetches simultaneously, use mergeMap with the concurrency parameter.
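A sketch of that, reusing requests and fetch from the snippet above; the 2 caps how many inner fetches run at the same time:
// At most 2 fetches are in flight at once; results merge as each one finishes.
requests.mergeMap(q => fetch(q), 2)
  .subscribe(r => console.log(r));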
I am wondering whether the following is defined behavior per the Promise specification:
var H = function (c) {
this.d_p = Promise.resolve();
this.d_c = c;
};
H.prototype.q = function () {
var s = this;
return new Promise(function (resolve) {
s.d_p = s.d_p.then(function () { // (1)
s.d_c({
resolve: resolve
});
});
});
};
var a,
h = new H(function (args) { a = args; }),
p;
Promise.resolve()
.then(function () {
p = h.q();
})
.then(function () { // (2)
a.resolve(42);
return p;
});
The question is whether it's guaranteed that the then callback marked (1) is called before the then callback marked (2).
Note that both promises in question are instantly resolved, so it seems to me like the (1) then callback should be scheduled as part of calling h.q(), which should be before the promise used to resolve (2) is resolved, so it should be before (2) is scheduled.
An example jsfiddle to play about with: https://jsfiddle.net/m4ruec7o/
It seems that this is what happens with bluebird >= 2.4.1, but not prior versions. I tracked the change in behavior down to this commit: https://github.com/petkaantonov/bluebird/commit/6bbb3648edb17865a6ad89a694a3241f38b7f86e
Thanks!
You can guarantee that h.q() will be called before a.resolve(42); is called because chained .then() handlers do execute in order.
If you were asking about code within h.q(), then s.d_p.then() is part of a completely different promise chain and promise specifications do not provide ordering for separate promise chains. They are free to execute with their own asynchronous timing. And, in fact, I've seen some differences in the execution of independent promise chains in different Javascript environments.
If you want to direct the execution order between two independent promise chains, then you will have to link them somehow so one operation does not run until some other operation has completed. You can either link the two chains directly or you can do something more complicated involving an intermediate promise that is inserted into one chain so it blocks that chain until it is resolved.
You may find this answer useful: What is the order of execution in javascript promises, which provides a line-by-line analysis of the execution order of both chained and independent promises and discusses how to make execution order predictable.
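A minimal sketch of linking two otherwise independent chains so that one waits for the other; the helper functions here are hypothetical:
// Without linking, chainA and chainB would run with independent timing.
var chainA = doFirstThing().then(doSecondThing);   // hypothetical async helpers

// Linking: chainB starts only after chainA has fully resolved.
var chainB = chainA.then(function () {
  return doUnrelatedThing();                       // also hypothetical
});

chainB.then(function () {
  console.log('runs strictly after both chains have finished');
});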
Using When.js, we have a situation where we want to quietly abort a promise chain midway, because the user has changed their mind. Our current method is simply never to resolve that step of the chain, effectively leaving the other promises "hanging". This seems slightly dirty?
If we reject the promise, then of course our exception handlers kick in. We could work around that, using a custom message which we detect and ignore, but that also seems a bit unclean.
Is there a better approach?
This is what the code looks like:
return getConfirmation(confirmConversion, 'Ready to upload your file to the ' + terria.appName + ' conversion service?')
.then(function() {
return loadItem(createCatalogMemberFromType('ogr', terria), name, fileOrUrl);
});
function getConfirmation(confirmConversion, message) {
...
var d = when.defer(); // there's no `when.promise(resolver)` in when 1.7.1
PopupMessageConfirmationViewModel.open('ui', {
...
confirmAction: d.resolve,
denyAction: function() { this.close(); /* Do nothing or d.reject(); ? */ }
});
return d.promise;
}
Result
For completeness, I changed the code to:
confirmAction: function () { d.resolve(true); },
enableDeny: true,
denyAction: function() { this.close(); d.resolve(false); }
and
.then(function(confirmed) {
return confirmed ? loadItem(createCatalogMemberFromType('ogr', terria), name, fileOrUrl) : undefined;
});
Making my comment into an answer:
If you're trying to return three possible states (resolved, rejected, and user-cancelled) so your code can handle all three outcomes correctly, and you're using promises, then you will have to make either the resolved value indicate that the user cancelled, or the reject reason indicate cancellation, and your code will have to check for that.
There are only two possible final states for a promise, not three, so you'll have to communicate the third state through one of the other two.
I'd recommend not stranding promises in the pending state unless you're absolutely sure they won't lead to a memory leak, and even then it doesn't seem like a very clean design to just strand them.
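For the reject-based variant (the resolve-based one is shown in the question's Result section above), a minimal sketch; it assumes denyAction is changed to call d.reject(new UserCancelled()) and that the promise library provides .catch (spelled .otherwise in older when.js):
function UserCancelled() {}

getConfirmation(confirmConversion, 'Ready to upload your file to the ' + terria.appName + ' conversion service?')
  .then(function () {
    return loadItem(createCatalogMemberFromType('ogr', terria), name, fileOrUrl);
  })
  .catch(function (err) {
    if (err instanceof UserCancelled) {
      return; // the user changed their mind: swallow the rejection quietly
    }
    throw err; // real errors keep propagating to the usual handlers
  });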
Often I find myself writing beforeSave and afterSave using promises in this form:
beforeSavePromise: function (request) {
var recipe = request.object;
var promises = [
doFooIfNeeded(recipe),
doBarIfNeeded(recipe),
doQuuxIfNeeded(recipe)
];
return Parse.Promise.when(promises)
},
Each of these is a conditional action that performs work only if a particular field or fields are dirty. So, for example, doFooIfNeeded might look something like:
function doFooIfNeeded(recipe) {
    if (recipe.dirty('imageFile')) {
        return /* some Promise that updates thumbnails of this image */;
    } else {
        return Parse.Promise.as(); // The no-op Promise. Do nothing!
    }
}
My question is: is Parse.Promise.as() really the no-op Promise, or is new Parse.Promise() more correct?
With all "dirty" outcomes contributing a resolved promise to the aggregation, you can choose for each "clean" outcome to contribute in any of the following ways :
not to put anything in the array,
put a value in the array,
put a resolved promise in the array,
put a rejected promise in the array.
(1), (2) and (3) will guarantee that the aggregated promise resolves regardless of the clean/dirty outcomes (barring some unexpected error).
(4) will cause the aggregated promise to resolve only if all outcomes are "dirty", or to reject as soon as any one "clean" outcome arises.
Realistically, the choice is between (2) and (4), depending on how you want the aggregated promise to behave. (1) would complicate the aggregation process, and (3) would be unnecessarily expensive.
It would seem appropriate for the aggregated promise to resolve when everything is either already "clean" or has been cleaned up, therefore I would suggest (2), in which case your foo()/bar()/quux() functions could be written as follows:
function foo(recipe) {
    return recipe.dirty('imageFile') ? updates_thumbnails() : true; // doesn't have to be `true` or even truthy; could be almost anything except a rejected promise.
}
And aggregate the outcomes as in the question:
Parse.Promise.when([ foo(recipe), bar(recipe), quux(recipe) ]).then(function() {
    // all thumbnails were successfully updated.
}, function() {
    // an unexpected error occurred in foo(), bar() or quux().
});
Parse.Promise.as() will technically give you a Promise with its state already set to resolved. When you return this Promise, its callback will be triggered successfully. You can supply a value as an argument, which basically triggers the callback with that value. According to the Parse guide on Promise creation, new Parse.Promise() creates a Promise whose state is neither resolved nor failed. This gives you the flexibility to manage its state manually as you wish.
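A small sketch contrasting the two, based on the behaviour described above (Parse is assumed to be loaded as in the rest of the thread):
var done = Parse.Promise.as('already there');
done.then(function (value) {
  console.log(value); // runs with 'already there': the promise is already resolved
});

var manual = new Parse.Promise(); // pending until you settle it yourself
setTimeout(function () {
  manual.resolve('resolved later');
}, 1000);
manual.then(function (value) {
  console.log(value); // runs only once resolve() has been called
});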