I have a list of multiple inputs (dynamically generated - unknown number).
I want each input to trigger an ajax request on every keystroke.
I want these ajax requests to be queued up, so only one is sent to
the server at a time, and the next one is sent only after getting a response to the earlier one.
If new requests are triggered from an input that already has requests in the queue, I want the old ones associated with that input to be cancelled.
If new requests are triggered from an input that does not already have requests in the queue, I want the new requests to just be added to the end of the queue without cancelling anything.
I'm told that RxJS makes these kinds of complicated async operations easy, but I can't seem to wrap my head around all the RxJS operators.
I have queueing working with a single input below, but I don't really understand why the defer is necessary or how to queue requests for separate inputs while maintaining the switchMap-like behavior I think I want for individual inputs themselves.
Rx.Observable.fromEvent(
$("#input"),
'keyup'
)
.map((event) => {
return $("#input").val();
})
.concatMap((inputVal) => {
return Rx.Observable.defer(() => Rx.Observable.fromPromise(
fetch(myURL + inputVal)
))
.catch(() => Rx.Observable.empty());
})
.subscribe();
First of all you have to create some sort of function that manages each input. Something along the following lines
function requestAtKeyStroke(inputId: string) {
return Rx.Observable.fromEvent(
$(inputId),
'keyup'
)
.map((event) => {
return $(inputId).val();
})
.filter(value => value.length > 0)
.switchMap((inputVal) => Rx.Observable.fromPromise(fetch(myURL + inputVal)))
}
Such a function deals with your third requirement, cancelling requests from the same input that are still in flight when a new one arrives. The key here is the switchMap operator.
Then what you can do is merge all the Observables corresponding to your inputs into one Observable. One way could be the following:
Observable.from(['#input1', '#input2']).map(input => requestAtKeyStroke(input)).mergeAll()
This does not fulfill all your requirements, since you may still have more than one request executing at the same time, coming from different inputs. I am not sure, though, whether it is possible to fulfill all of them at once.
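One sketch that gets closer to all the requirements at once: merge every input's keystrokes into a single tagged stream, queue the fetches with concatMap, and tag each keystroke with a per-input sequence number so stale queued requests can be skipped. Wrapping the fetch in defer is what makes this possible, and it also answers the question about defer above: defer postpones creating the inner Observable (and firing the fetch) until concatMap actually dequeues it, rather than the moment the keystroke arrives. Everything except myURL, fetch, and jQuery from the question is an assumption here.
const inputIds = ['input1', 'input2']; // assumed ids of the dynamically generated inputs
const latest = {};                     // most recent sequence number seen per input
let seq = 0;

const keystrokes$ = Rx.Observable.merge(
  ...inputIds.map(id =>
    Rx.Observable.fromEvent($('#' + id), 'keyup')
      .map(() => ({ id, value: $('#' + id).val(), seq: (latest[id] = ++seq) }))
  )
);

keystrokes$
  .concatMap(k =>
    Rx.Observable.defer(() =>
      // by the time the queue reaches this entry, a newer keystroke from the
      // same input may be queued behind it; if so, skip this stale request
      k.seq !== latest[k.id]
        ? Rx.Observable.empty()
        : Rx.Observable.fromPromise(fetch(myURL + k.value))
            .catch(() => Rx.Observable.empty())
    )
  )
  .subscribe();
Note this only skips stale requests while they are still queued; a request already in flight is left to finish, since concatMap has already subscribed to it.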
TLDR: the working example is in the last code block of this question. Check out @bryan60's answer for a working example using concat rather than mergeMap.
I'm trying to run a number of remote requests sequentially, but only the first observable is executed.
The number of request vary, so I can't do a dodgy solution where I nest observables within each other.
I'm using the following code:
const observables = [
observable1,
observable2,
...
];
from(observables).pipe(
mergeMap(ob=> {
return ob.pipe(map(res => res));
}, undefined, 1)
).subscribe(res => {
console.log('Huzzah!');
})
In the past (RxJS 5.5) I've used the following:
let o = Observable.from(observables).mergeMap((ob) => {
return ob;
}, null, 1);
o.subscribe(res => {
console.log('Huzzah!');
})
I'm not sure what I'm doing wrong, can anybody shed some light?
An additional request would be to only print 'Huzzah!' once on completion of all requests rather than for each individual Observable.
EDIT:
Removing undefined from my original code makes it work; however, there was another issue that caused only the first observable to be executed.
I'm using Angular's HttpClient for remote requests. My observable code looked like this:
const observables = [];
// Only the first observable would be executed
observables.push(this.http.get(urla));
observables.push(this.http.get(urlb));
observables.push(this.http.get(urlc));
Adding .pipe(take(1)) to each observable results in each observable being executed:
const observables = [];
// All observables will now be executed
observables.push(this.http.get(urla).pipe(take(1)));
observables.push(this.http.get(urlb).pipe(take(1)));
observables.push(this.http.get(urlc).pipe(take(1)));
The code I ended up using, which executes all observables in sequential order and only triggers Huzzah! once is:
const observables = [];
observables.push(this.http.get(urla).pipe(take(1)));
observables.push(this.http.get(urlb).pipe(take(1)));
observables.push(this.http.get(urlc).pipe(take(1)));
from(observables).pipe(
mergeMap(ob=> {
return ob.pipe(map(res => res));
}, 1),
reduce((all: any, res: any) => all.concat(res), [])
).subscribe(res => {
console.log('Huzzah!');
})
Thanks to @bryan60 for helping me with this issue.
If these are http requests that complete, I think your bug is caused by a change to the mergeMap signature that removed the result selector. It's hard to be sure without knowing exactly which version you're on, as the result selector was there, then removed, then added again, and it is being removed for good in v7.
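For reference, under the signatures that dropped the result selector (a sketch; your exact version may differ), the concurrency limit goes straight into the second argument:
from(observables).pipe(
  // concurrent = 1 subscribes to one inner observable at a time, queueing the rest
  mergeMap(ob => ob, 1)
).subscribe(res => console.log(res));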
If you want to run them sequentially, this is all you need:
// concat runs input observables sequentially
concat(...observables).subscribe(res => console.log(res))
If you want to wait until they're all done to emit, do this:
concat(...observables).pipe(
// this will gather all responses and emit them all when they're done
reduce((all, res) => all.concat([res]), [])
// if you don't care about the responses, just use last()
).subscribe(allRes => console.log(allRes))
In my personal utility rxjs lib, I always include a concatJoin operator that combines concat and reduce in exactly this way.
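A possible shape for such a helper (a sketch; concatJoin is not part of RxJS itself):
// run the sources one at a time, then emit a single array of all their results
function concatJoin(...sources) {
  return concat(...sources).pipe(
    reduce((all, res) => all.concat([res]), [])
  );
}

// usage: concatJoin(obs1, obs2, obs3).subscribe(results => console.log(results));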
The only trick is that concat waits for each observable to complete before it moves on to the next one, but the same is true for mergeMap with concurrent subscriptions set to 1, so that should be fine. Things like http requests complete naturally after one emission; websockets, subjects, or event emitters behave a bit differently and have to be completed manually, either with operators like first or take, or at the source.
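For example, a long-lived source can be capped before it's queued (socket$ here is a hypothetical stream that never completes on its own):
// take(1) completes the stream after its first emission,
// letting concat move on to the next source
concat(socket$.pipe(take(1)), nextRequest$).subscribe(res => console.log(res));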
If you are not concerned about the order of execution and just want 'Huzzah!' to be printed once all the observables have completed, forkJoin can also be used. Try this:
forkJoin(...observables).subscribe(res => console.log('Huzzah!'));
What's the best way to handle asynchronous updates in the middle of an Observable stream?
Let's say there are 3 observables:
Obs1 (gets data from API) -> pipes to Obs2
Obs2 (transforms data) -> pipes to Obs3
Obs3 (sends transformed data)
(The actual application is more complex, and there's reasons it's not done in a single Observable, this is just a simple example).
That all works well and good if it's a linear synchronous path.
But we also have async messages that will change the output of Obs2.
3 scenarios I'm asking about are:
- we fetch data, and go through Obs1, Obs2 & Obs3
- we get a message to make a change, go through Obs2 & Obs3
- we get a different message to make a change which also needs to apply the change from the previous message, through Obs2 & Obs3
The main problem here is that there are different types of asynchronous messages that will change the outcome of Obs2, but they all need to know what the previous outcome of Obs2 was (so that changes from earlier messages are still applied).
I have tried using switchMap in Obs2 with a scan in Obs1 like this:
obs1
const obs1$ = source$.pipe( // source$ stands in for the upstream API fetch
  // this returns a function used in the reducer below
  map((data) => (prevData) => 'modifiedData'),
  scan((data, reducer) => reducer(data), {})
)
obs2
const obs2$ = obs1$.pipe(
switchMap(data =>
someChange$.pipe(map(reducer => reducer(data)))
)
)
where someChange$ is a BehaviorSubject applying a change using another reducer function.
This works fine for async message #1 that makes some change.
But when message #2 comes in and a different change is needed, the first change is lost.
The changes that should be in "prevData" in obs1$ are always undefined, because the scan runs before the message is applied.
How can I take the output from obs2$ and apply asynchronous updates to it in a way that remembers all of the past updates (and lets me clear all changes if needed)?
So if I got the question right, there are two problems that this question tackles:
First: how to cache the last 2 emitted values from a stream.
scan is definitely the right way to go. If this cache logic is needed in more than one place/file, I would go for a custom pipe operator, like the following one:
function cachePipe() {
return sourceObservable =>
sourceObservable.pipe(
scan((acc, cur) => {
return acc.length === 2 ? [...acc.slice(1), cur] : [...acc, cur];
}, [])
);
}
cachePipe will always return the latest 2 values passed through the stream.
...
.pipe(
cachePipe()
)
Second: how to access data from multiple streams at the same time, upon a stream event.
Here rxjs's combineLatest creation operator might do the trick for you:
combineLatest(API$, async1$ ,async2$,async3$)
.pipe(
// here I have access to an array of the last emitted value of all streams
// and the data can be passed to `Obs2` in your case
)
Into combineLatest I can pass any number of observables, which resolves the second problem.
Note:
combineLatest needs all of the streams inside of it to emit at least once before the operator starts to emit their combined value. One workaround is to use the startWith operator with your input streams; another way is to pass the data through BehaviorSubject-s.
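In sketch form, the startWith workaround might look like this (stream names as above; null is just an assumed placeholder value):
combineLatest(
  API$,
  // each async stream emits immediately, so combineLatest can start right away
  async1$.pipe(startWith(null)),
  async2$.pipe(startWith(null)),
  async3$.pipe(startWith(null))
).subscribe(([data, a1, a2, a3]) => {
  // data is Obs1's output; a1..a3 are the latest async changes (or null)
});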
Here is a demo at CodeSandbox that uses the cachePipe() and startWith strategy to combine the source (Obs1) with the async observables that will change the data.
I have the following epic I use in my application to handle api requests:
action$ => {
return action$.ofType(actions.requestType)
.do(() => console.log('handled epic ' + actions.requestType))
.switchMap((action) => (
Observable.create((obs) => {
obs.next({ type: type, value: action.value, form: action.form });
})
.debounceTime(250)
.switchMap((iea) => (
Observable.ajax(ajaxPost(url(iea.value), body ? body(iea.value) : action.form))
.mergeMap(payload => {
return Observable.merge(
Observable.of(actions.success(payload)),
/* some other stuff */
);
})
.catch(payload => {
return [actions.failure(payload)];
})
))
))
.takeUntil(action$.filter((a) => (a.type === masterCancelAction)))
.repeat();
};
Basically, any time I perform an api request, I dispatch a request action. If I dispatch another request quickly, the previous one is ignored using debounceTime. Additionally, the request can be cancelled using the masterCancelAction, and when it is cancelled, repeat() restarts the epic. This epic works as intended in all cases except one.
The failure case occurs when a user uses the browser back button during a request. In this case I fire the masterCancelAction to cancel the request. However, in the same execution context as the masterCancelAction, another request action is dispatched to perform a new request on the same epic, but the api request does not occur (the console.log does occur, though), as if there were no repeat(). In other cases where cancels occur, the next request is not invoked from the same execution context and everything works fine, so it seems that in this case my code does not give repeat() a chance to restart the epic?
A dirty workaround I found was to use setTimeout(() => dispatch(action), 0) on the request that dispatches after the cancellation. This seems to allow repeat() to execute. I tried passing different schedulers into repeat, but that didn't seem to help. Also, attaching takeUntil and repeat to my inner switchMap solves the problem, but then other cases, where my next request does not execute in the same call stack, fail.
Is there a way I can solve this problem without using setTimeout? Maybe it is not a repeat-related problem, but it seems to be the case.
Using rxjs 5.0.3 and redux-observable 0.14.1.
The issue is not 100% clear without something like a jsbin to see what you mean, but I do see some general issues that might help:
Anonymous Observable never completes
When creating a custom anonymous Observable it's important to call observer.complete() if you do indeed want it to complete. In most cases, not doing so will cause the subscription to be a memory leak and might also cause other strange behaviors.
Observable.create((observer) => {
observer.next({ type: type, value: action.value, form: action.form });
observer.complete();
})
Observable.of would have been equivalent:
Observable.of({ type: type, value: action.value, form: action.form })
However, it's not clear why this was done, as the values it emits are already captured in scope.
debounceTime in this case does not debounce, it delays
Since the anonymous observable it's applied to only ever emits a single item, debounceTime will act just like a regular .delay(250). I'm betting you intended instead to debounce actions.requestType actions, in which case you'd need to apply your debouncing outside the switchMap, after the action$.ofType(actions.requestType), as sketched below.
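In sketch form (names taken from the epic in the question; the full version appears in the Summary below):
action$.ofType(actions.requestType)
  // debounce the actions themselves, not a single-value inner observable
  .debounceTime(250)
  .switchMap((action) =>
    Observable.ajax(ajaxPost(url(action.value), action.form))
  );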
Observable.of accepts any number of arguments to emit
This is more of a "did you know?" than an issue, but I noticed you're merging your of with /* some other stuff */, which I assume are other of observables merged in. Instead, you can just return a single of and pass the actions as arguments:
Observable.of(
actions.success(payload),
/* some other actions */
actions.someOtherOne(),
actions.etc()
);
Also, when you find yourself emitting multiple actions synchronously like this, consider whether your reducers should be listening for the same, single action instead of two or more. Sometimes this wouldn't make sense, since you want them to handle completely unrelated actions; it's just something to keep in mind that people often forget: all reducers receive all actions, so multiple reducers can change their state from the same action.
.takeUntil will stop the epic from listening for future actions
Placing the takeUntil on the top-level observable chain causes the epic to stop listening for action$.ofType(actions.requestType), which is why you added the .repeat() after it. This might work in some cases, but it's inefficient and can cause other hard-to-spot bugs. Epics should instead be thought of as sort of like sidecar processes that usually "start up" with the app and then keep listening for a particular action until the app "shuts down", i.e. the user leaves the app. They aren't actually processes; it's just helpful to think of them this way as an abstraction.
So each time it matches its particular action it then most often will switchMap, mergeMap, concatMap, or exhaustMap into some side effect, like an ajax call. That inner observable chain is what you want to make cancellable. So you'd place your .takeUntil on it, at the appropriate place in the chain.
Summary
As mentioned, it's not clear what you intended to do and what the issue is without a more complete example like a jsbin, but strictly based on the code provided, this is my guesstimate:
const someRequestEpic = action$ => {
return action$.ofType(actions.requestType)
.debounceTime(250)
.do(() => console.log('handled epic ' + actions.requestType))
.switchMap((action) =>
Observable.ajax(ajaxPost(url(action.value), body ? body(action.value) : action.form))
.takeUntil(action$.ofType(masterCancelAction))
.mergeMap(payload => {
return Observable.of(
actions.success(payload),
/* some other actions */
...etc
);
})
.catch(payload => Observable.of(
actions.failure(payload)
))
);
};
Check out the Cancellation page in the redux-observable docs.
If this is a bit confusing, I'd recommend digging a bit deeper into what Observables are and what an "operator" is and does so that it doesn't feel magical and where you should place an operator makes more sense.
Ben's post on Learning Observable by Building Observable is a good start.
I'm working on something that is recording data coming from a queue. It was easy enough to process the queue into an Observable so that I can have multiple endpoints in my code receiving the information in the queue.
Furthermore, I can be sure that the information arrives in order. That bit works nicely as well since the Observables ensure that. But, one tricky bit is that I don't want the Observer to be notified of the next thing until it has completed processing the previous thing. But the processing done by the Observer is asynchronous.
As a more concrete example that is probably simple enough to follow: imagine my queue contains URLs. I'm exposing those as an Observable in my code. Then I subscribe an Observer whose job is to fetch the URLs and write the content to disk (this is a contrived example, so don't take issue with these specifics). The important point is that fetching and saving are async. My problem is that I don't want the observer to be given the "next" URL from the Observable until it has completed the previous processing.
But the call to next on the Observer interface returns void, so there is no way for the Observer to communicate back to me that it has actually completed the async task.
Any suggestions? I suspect there is probably some kind of operator that could be coded up that would basically withhold future values (queue them up in memory?) until it somehow knew the Observer was ready for it. But I was hoping something like that already existed following some established pattern.
A similar use case I ran into before:
window.document.onkeydown = (e) => {
  return false; // prevent default key handling on the demo page
};

let count = 0;

// build an async task that emits once and completes after `time` ms
let asyncTask = (name, time) => {
  time = time || 2000;
  return Rx.Observable.create(function (obs) {
    setTimeout(function () {
      count++;
      obs.next('task:' + name + count);
      console.log('Task:', count, ' ', time, 'task complete');
      obs.complete();
    }, time);
  });
};

let queueExec$ = new Rx.Subject();

Rx.Observable.fromEvent(btnA, 'click').subscribe(() => {
  queueExec$.next(asyncTask('A', 4000));
});
Rx.Observable.fromEvent(btnB, 'click').subscribe(() => {
  queueExec$.next(asyncTask('B', 4000));
});
Rx.Observable.fromEvent(btnC, 'click').subscribe(() => {
  queueExec$.next(asyncTask('C', 4000));
});

// concatMap subscribes to one queued task at a time, in arrival order
queueExec$.concatMap(value => value)
  .subscribe(function (data) {
    console.log('onNext', data);
  }, function (error) {
    console.log('onError', error);
  }, function () {
    console.log('completed');
  });
What you describe sounds like "backpressure". You can read about it in the RxJS 4 documentation: https://github.com/Reactive-Extensions/RxJS/blob/master/doc/gettingstarted/backpressure.md. Note that it mentions operators that don't exist in RxJS 5. For example, have a look at "Controlled Observables", which refer to what you need.
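In RxJS 4 that looked roughly like this (a sketch; treat the exact API as an assumption, and doAsyncWork as a hypothetical promise-returning function):
// RxJS 4 only: pull-based consumption with a controlled observable
const controlled = source.controlled();

controlled.subscribe(val => {
  // process one value, then explicitly request the next
  doAsyncWork(val).then(() => controlled.request(1));
});

controlled.request(1); // pull the first value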
I think you could achieve the same with concatMap and an instance of Subject:
const asyncOperationEnd = new Subject();
source.concatMap(val => asyncOperationEnd
.mapTo(void 0)
.startWith(val)
.take(2) // that's `val` and the `void 0` that ends this inner Observable
)
.filter(Boolean) // Always ignore `void 0`
.subscribe(val => {
// do some async operation...
// call `asyncOperationEnd.next()` and let `concatMap` process another value
});
From your description it actually seems like the "observer" you're mentioning works like a Subject, so it might make more sense to make a custom Subject class that you could use in any Observable chain.
Isn't this just concatMap?
// Requests are coming in a stream, with small intervals or without any.
const requests = Rx.Observable.of(2, 1, 16, 8, 16)
  .concatMap(v => Rx.Observable.timer(1000).mapTo(v));

// Fetch, it takes some time.
function fetch(query) {
  return Rx.Observable.timer(100 * query)
    .mapTo('!' + query).startWith('?' + query);
}

requests.concatMap(q => fetch(q));
https://rxviz.com/v/Mog1rmGJ
If you want to allow multiple fetches simultaneously, use mergeMap with concurrency parameter.
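For instance, reusing the fetch above:
// up to 2 fetches run at once; further requests wait in line
requests.mergeMap(q => fetch(q), 2);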
I have at least two buttons that I want to dynamically listen for clicks on. listeningArray$ will emit an array (ar) of the button #'s that I need to be listening to. When somebody clicks on one of the buttons I'm listening to, I need to log which button was clicked and also log the value from a time interval.
If ar goes from [1,2] to [1], we need to stop listening to clicks on button #2. So the DOM click event needs to be removed for 2 and that should trigger the .finally() operator. But for 1, we should remain subscribed and the code inside the .finally() should not run, since nothing is being unsubscribed.
const obj$ = {};
Rx.Observable.combineLatest(
Rx.Observable.interval(2000),
listeningArray$ // Will randomly emit either [1] or [1,2]
)
.switchMap(([x, ar]) => {
const observables = [];
ar.forEach(n => {
let nEl = document.getElementById('el'+n);
obj$[n] = obj$[n] || Rx.Observable.fromEvent(nEl, 'click')
.map(()=>{
console.log(' el' + n);
})
.finally(() => {
console.log(' FINALLY_' + n);
});
observables.push(obj$[n]);
})
return Rx.Observable.combineLatest(...observables);
})
.subscribe()
But what's happening is that every time the interval emits a value, the DOM events ALL get removed and then immediately added back on, and the code inside the .finally operator runs for both 1 and 2.
This is really frustrating me. What am I missing?
It's a bit of a complex situation, so I created this: https://jsfiddle.net/mfp22/xtca98vx/7/
I was actually really close, but I misunderstood the point of switchMap.
switchMap is designed to unsubscribe from the observable it returns whenever a new value is emitted from above. This is why it can be used to cancel old pending Http requests when a new request needs to be made instead.
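For example (a sketch with assumed names):
// each newly typed term cancels the previous in-flight request
searchTerm$.switchMap(term =>
  Rx.Observable.ajax('/search?q=' + encodeURIComponent(term))
).subscribe(showResults);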
The problem I was having is to be expected. switchMap will unsubscribe from the previously returned observable before subscribing to the current one. This was unacceptable, as I explained in the question. The reason this was unacceptable was that in my actual project, the fromEvent observables were listening to Firebase child_added events, so when these cold observables went from having no subscribers to having 1 subscriber, Firebase would subsequently fire the event for every child already existing, as well as for future ones added.
I played with mergeMap for a while, but it was really difficult and buggy to manually have to unsubscribe from previously returned observables.
So I added a subscriber for the inner observables while switchMap was doing its unsubscribe-from-old, subscribe-to-new handoff, so that there would always be at least one subscriber. I used takeUntil(Observable.timer(0)) to make sure these extra subscribers didn't build up and cause a memory leak.
There may be a better solution, but this was the best one I found.
const obj$ = {};
Rx.Observable.combineLatest(
Rx.Observable.interval(2000),
listeningArray$ // Will randomly emit either [1] or [1,2]
)
.switchMap(([x, ar]) => {
const observables = [];
ar.forEach(n => {
let nEl = document.getElementById('el'+n);
obj$[n] = obj$[n] || Rx.Observable.fromEvent(nEl, 'click')
.map(()=>{
console.log(' el' + n);
})
.finally(() => {
console.log(' FINALLY_' + n);
})
.share();
obj$[n].takeUntil(Rx.Observable.timer(0))
.subscribe();
observables.push(obj$[n]);
})
return Rx.Observable.combineLatest(...observables);
})
.subscribe()
I also had to add the .share() method. I was going to need it anyway. I'm using this pattern to let some Angular components declare what data they need, ignoring what other components might want, to achieve a better separation of concerns. So multiple components can subscribe to the same Firebase observables, but the .share() operator ensures that each message from Firebase is only handled once (I'm dispatching actions to a Redux store for each one).
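For example (a sketch; the handlers are hypothetical):
const clicks$ = Rx.Observable.fromEvent(btn, 'click')
  .do(() => console.log('source handled once per click'))
  .share();

// both subscribers reuse one underlying DOM listener thanks to share()
clicks$.subscribe(dispatchActionA);
clicks$.subscribe(dispatchActionB);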
Working solution: https://jsfiddle.net/mfp22/xtca98vx/8/
State in FRP is immutable, so when you switchMap on the second emission, the previous inner combineLatest observable containing [1,2] gets unsubscribed, and its finally operator invoked, before the next one containing only [1] is subscribed.
If you only want to unsubscribe from one button, you can store state in the DOM (add an attribute to the button) and use filter to ignore that button.
Or you can add a takeWhile() to every button's stream dictating when it should be unsubscribed, so it can invoke its own finally(), as sketched below.
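A minimal sketch of that last idea, reusing listeningArray$ and the element ids from the question (note that takeWhile only evaluates its predicate when a click actually arrives; a takeUntil driven by listeningArray$ would complete the stream immediately instead):
let latestAr = [];
listeningArray$.subscribe(ar => latestAr = ar);

const buttonClicks$ = n =>
  Rx.Observable.fromEvent(document.getElementById('el' + n), 'click')
    // complete this button's stream once it drops out of the listening array
    .takeWhile(() => latestAr.includes(n))
    .finally(() => console.log(' FINALLY_' + n));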