Filtered send queue in rxjs

I'm relatively inexperienced with rxjs, so if this is something that would be a pain or really awkward to do, please tell me and I'll go a different route. In this particular use case, I want to queue up updates to send to the server, but if there's an update "in flight" I want to keep only the latest item, which will be sent when the current in-flight request completes.
Honestly, I am kind of at a loss as to where to start. It seems like this would involve either a buffer-type operator and/or a concatMap.
Here's what I would expect to happen:
const updateQueue$ = new Subject<ISettings>()
function sendToServer (settings: ISettings): Observable {...}
...
// we should send this immediately because there's nothing in-flight
updateQueue$.next({ volume: 25 });
updateQueue$.next({ volume: 30 });
updateQueue$.next({ volume: 50 });
updateQueue$.next({ volume: 65 });
// let's assume that our original update just completed
// I would now expect a new request to go out with `{ volume: 65 }` and the previous two to be ignored.

I think you can achieve what you want with this:
const allowNext$ = new Subject<boolean>()
const updateQueue$ = new Subject<ISettings>()

function sendToServer (settings: ISettings): Observable { ... }

updateQueue$
  .pipe(
    // Pass along flag to mark the first emitted value
    map((value, index) => {
      const isFirstValue = index === 0
      return { value, isFirstValue }
    }),
    // Allow the first value through immediately
    // Debounce the rest until subject emits
    debounce(({ isFirstValue }) => isFirstValue ? of(true) : allowNext$),
    // Send network request
    switchMap(({ value }) => sendToServer(value)),
    // Push to subject to allow next debounced value through
    tap(() => allowNext$.next(true))
  )
  .subscribe(response => {
    ...
  })

This is a pretty interesting question.
If you did not have the requirement of issuing the last item in the queue, but simply wanted to ignore all update requests until the one in flight completes, then you would simply use the exhaustMap operator.
But the fact that you want to ignore all BUT the last request for update makes the potential solution a bit more complex.
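For reference, a minimal sketch of that simpler exhaustMap-only variant (reusing updateQueue$ and sendToServer from the question, with exhaustMap imported from 'rxjs/operators'); it simply drops anything that arrives while a request is in flight:
updateQueue$
  .pipe(
    // ignore every update that arrives while a request is still in flight
    exhaustMap(settings => sendToServer(settings))
  )
  .subscribe(response => {
    // handle the response
  });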
If I understand the problem well, I would proceed as follows.
First of all I would define 2 Subjects: one that emits the values for the update operation (i.e. the one you have already defined) and one dedicated to emitting only the last item in the queue, if there is one.
The code would look like this
let lastUpdate: ISettings;

const _updateQueue$ = new Subject<ISettings>();
const updateQueue$ = _updateQueue$
  .asObservable()
  .pipe(tap(settings => (lastUpdate = settings)));

const _lastUpdate$ = new Subject<ISettings>();
const lastUpdate$ = _lastUpdate$.asObservable().pipe(
  tap(() => (lastUpdate = null)),
  delay(0)
);
Then I would merge the 2 Observables to obtain the stream you are looking for, like this
merge(updateQueue$, lastUpdate$)
  .pipe(
    exhaustMap(settings => sendToServer(settings))
  )
  .subscribe({
    next: res => {
      // do something with the response
      if (lastUpdate) {
        // emit only if there is a new "last one" in the queue
        _lastUpdate$.next(lastUpdate);
      }
    },
  });
You may notice that the variable lastUpdate is used to ensure that the last update in the queue is emitted only once.
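To illustrate, here is roughly what would happen with the sequence of updates from the question (a sketch, assuming each sendToServer call completes shortly after emitting its response):
_updateQueue$.next({ volume: 25 }); // nothing in flight, exhaustMap sends it right away
_updateQueue$.next({ volume: 30 }); // dropped by exhaustMap, but lastUpdate becomes { volume: 30 }
_updateQueue$.next({ volume: 50 }); // dropped, lastUpdate becomes { volume: 50 }
_updateQueue$.next({ volume: 65 }); // dropped, lastUpdate becomes { volume: 65 }
// when the { volume: 25 } request completes, the subscriber pushes lastUpdate onto
// _lastUpdate$, so exactly one more request goes out, with { volume: 65 }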

Related

How do I append to an observable inside the observable itself

My situation is as follows: I am performing sequential HTTP requests, where one HTTP request depends on the response of the previous. I would like to combine the response data of all these HTTP requests into one observable. I have implemented this before using an async generator. The code for this was relatively simple:
async function* AsyncGeneratorVersion() {
  let moreItems = true; // whether there is a next page
  let lastAssetId: string | undefined = undefined; // used for pagination
  while (moreItems) {
    // fetch current batch (this performs the HTTP request)
    const batch = await this.getBatch(/* arguments */, lastAssetId);
    moreItems = batch.more_items;
    lastAssetId = batch.last_assetid;
    yield* batch.getSteamItemsWithDescription();
  }
}
I am trying to move away from async generators, and towards RxJs Observables. My best (and working) attempt is as follows:
const observerVersion = new Observable<SteamItem>((subscriber) => {
  (async () => {
    let moreItems = true;
    let lastAssetId: string | undefined = undefined;
    while (moreItems) {
      // fetch current batch (this performs the HTTP request)
      const batch = await this.getBatch(/* arguments */, lastAssetId);
      moreItems = batch.more_items;
      lastAssetId = batch.last_assetid;
      const items = batch.getSteamItemsWithDescription();
      for (const item of items) subscriber.next(item);
    }
    subscriber.complete();
  })();
});
Now, I believe there must be some way of improving this Observable variant; this code does not seem very reactive to me. I have tried several things using pipe, but unfortunately these were all unsuccessful.
I found concatMap to come close to a solution. It allowed me to concatenate the next HTTP request as an observable (done with the this.getBatch method), but I could not find a good way to do so without abandoning the response of the current HTTP request.
How can this be achieved? In short, I believe this problem could be described as appending data to an observable inside the observable itself. (But perhaps this is not a good way of handling this situation.)
TLDR;
Here's a working StackBlitz demo.
Explanation
Here would be my approach:
// Faking an actual request
const makeReq = (prevArg, response) =>
  new Promise((r) => {
    console.log(`Running promise with the prev arg as: ${prevArg}!`);
    setTimeout(r, 1000, { prevArg, response });
  });

// Preparing the sequential requests.
const args = [1, 2, 3, 4, 5];

from(args)
  .pipe(
    // Running the requests sequentially.
    mergeScan(
      (acc, crtVal) => {
        // `acc?.response` will refer to the previous response
        // and we're using it for the next request.
        return makeReq(acc?.response, crtVal);
      },
      // The seed (works the same as in `reduce`).
      null,
      // Making sure that only one request is run at a time.
      1
    ),
    // Combining all the responses into one object
    // and emitting it after all the requests are done.
    reduce((acc, val, idx) => ({ ...acc, [`request${idx + 1}`]: val }), {})
  )
  .subscribe(console.warn);
Firstly, from(array) will emit each item from the array, synchronously and one by one.
Then, there is mergeScan. It is essentially the result of combining scan and merge. With scan, we can accumulate values (in this case we're using it to get access to the response of the previous request), and the merge part allows us to return observables from the accumulator.
To make things a bit easier to understand, think of the Array.prototype.reduce function. It looks something like this:
[].reduce((acc, value) => { return { ...acc }}, /* Seed value */{});
What the merge part of mergeScan adds is that the accumulator can return an observable, e.g. (acc, value) => new Observable(...), instead of a plain value like { ...acc }. The latter implies synchronous behavior, whereas the former allows asynchronous behavior.
Let's go a bit step by step:
when 1 is emitted, makeReq(undefined, 1) will be invoked
after the first makeReq (from above) resolves, makeReq(1, 2) will be invoked
after makeReq(1, 2) resolves, makeReq(2, 3) will be invoked and so on...
Somebody I consulted regarding this matter came up with this solution; I think it's quite elegant:
defer(() => this.getBatch(options)).pipe(
  expand(({ more_items, last_assetid }) =>
    more_items
      ? this.getBatch({ ...options, startAssetId: last_assetid })
      : EMPTY,
  ),
  concatMap((batch) => batch.getSteamItemsWithDescription()),
);
From my understanding, the use of expand here is very similar to the use of mergeScan in Andrei's answer above.
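For intuition, here is a tiny self-contained sketch of the expand pattern, with a made-up fetchPage standing in for getBatch: every emission is fed back into the projection, so each response can trigger the request for the next page until there is nothing left to fetch.
import { EMPTY, of } from 'rxjs';
import { expand, map } from 'rxjs/operators';

// A toy "page" fetcher standing in for getBatch: pages 1 to 3 exist.
const fetchPage = (page: number) =>
  of({ page, items: [`item-${page}a`, `item-${page}b`], hasNext: page < 3 });

fetchPage(1)
  .pipe(
    // Each response is fed back into expand, which requests the next page
    // until hasNext is false and EMPTY stops the recursion.
    expand((res) => (res.hasNext ? fetchPage(res.page + 1) : EMPTY)),
    map((res) => res.items)
  )
  .subscribe(console.log); // logs the items of pages 1, 2 and 3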

rxjs: why does the stream emit twice when another stream uses take(1)

When I use take(1), it console.logs 1 twice, as in the code below:
const a$ = new BehaviorSubject(1).pipe(publishReplay(1), refCount());
a$.pipe(take(1)).subscribe();
a$.subscribe((v) => console.log(v)); // emit twice (1 1)
But when I remove take(1), or remove publishReplay(1), refCount(), it behaves as I expect (only one 1 is logged):
const a$ = new BehaviorSubject(1).pipe(publishReplay(1), refCount());
a$.subscribe();
a$.subscribe((v) => console.log(v)); // emit 1
// or
const a$ = new BehaviorSubject(1);
a$.pipe(take(1)).subscribe();
a$.subscribe((v) => console.log(v)); // emit 1
Why?
Version: rxjs 6.5.2
Let's first have a look at how publishReplay is defined:
const subject = new ReplaySubject<T>(bufferSize, windowTime, scheduler);
return (source: Observable<T>) => multicast(() => subject, selector!)(source) as ConnectableObservable<R>;
multicast() will return a ConnectableObservable, which is an observable that exposes the connect method. Used in conjunction with refCount, the source will be subscribed when the first subscriber registers, and it will automatically be unsubscribed when there are no more active subscribers. The multicasting behavior is achieved by placing a Subject (or any kind of subject) between the data consumers and the data producer.
() => subject implies that the same subject instance will be used every time the source is subscribed, which is an important part of why you're getting this behavior.
const src$ = (new BehaviorSubject(1)).pipe(
  publishReplay(1), refCount() // 1 1
);

src$.pipe(take(1)).subscribe()
src$.subscribe(console.log)
Let's see what would be the flow of the above snippet:
src$.pipe(take(1)).subscribe()
Since it's the first subscriber, the source (the BehaviorSubject) will be subscribed. When this happens, it will emit 1, which will have to go through the ReplaySubject in use. Then, the subject will pass along that value to its subscribers (e.g. take(1)). But because you're using publishReplay(1) (the 1 indicates the bufferSize), that value will be cached by that subject.
src$.subscribe(console.log)
The way refCount works is that it first subscribes to the Subject in use, and then to the source:
const refCounter = new RefCountSubscriber(subscriber, connectable);
// Subscribe to the subject in use
const subscription = connectable.subscribe(refCounter);
if (!refCounter.closed) {
  // Subscribe to the source
  (<any> refCounter).connection = connectable.connect();
}
Incidentally, here's what happens on connectable.subscribe:
_subscribe(subscriber: Subscriber<T>) {
  return this.getSubject().subscribe(subscriber);
}
Since the subject is a ReplaySubject, it will send the cached value to its newly registered subscriber (hence the first 1). Then, because there were no active subscribers left (take(1) completed after the first emission), the source had been unsubscribed, so refCount now subscribes to it again; the BehaviorSubject emits its current value once more, which goes through the subject and explains the second 1.
If you'd like to get only one 1 value, you can achieve this by making sure that every time the source is subscribed, a different subject will be used:
const src$ = (new BehaviorSubject(1)).pipe(
  shareReplay({ bufferSize: 1, refCount: true }) // 1
);

src$.pipe(take(1)).subscribe()
src$.subscribe(console.log)
StackBlitz.
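To spell out the same idea without shareReplay, here is a minimal sketch (assuming RxJS 6, where multicast accepts a subject factory and refCount is a pipeable operator): because the factory creates a fresh ReplaySubject for every connection, the second subscription starts with an empty subject, and the single 1 comes only from re-subscribing the source.
import { BehaviorSubject, ReplaySubject } from 'rxjs';
import { multicast, refCount, take } from 'rxjs/operators';

const src$ = new BehaviorSubject(1).pipe(
  // a factory, so every new connection gets its own, empty ReplaySubject
  multicast(() => new ReplaySubject<number>(1)),
  refCount()
);

src$.pipe(take(1)).subscribe();
src$.subscribe(console.log); // 1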

RX: Synchronizing promises

Let's say I have a rather typical use of rx that does requests every time some change event comes in (I write this in the .NET style, but I'm really thinking of Javascript):
myChanges
  .Throttle(200)
  .Select(async data => {
    await someLongRunningWriteRequest(data);
  })
If the request takes longer than 200ms, there's a chance a new request begins before the old one is done - potentially even that the new request is completed first.
How to synchronize this?
Note that this has nothing to do with multithreading, and that's the only thing I could find information about when googling for "rx synchronization" or something similar.
You could use the concatMap operator, which will start working on the next item only after the previous one has completed.
Here is an example where events$ are emitted at an interval of 200ms and then processed sequentially, each taking a different amount of time:
const { Observable } = Rx;

const fakeWriteRequest = data => {
  console.log('started working on: ', data);
  return Observable.of(data).delay(Math.random() * 2000);
}

const events$ = Observable.interval(200);

events$.take(10)
  .concatMap(i => fakeWriteRequest(i))
  .subscribe(e => console.log(e));
<script src="https://unpkg.com/rxjs/bundles/Rx.min.js"></script>
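The snippet above uses the RxJS 5 dot-chaining API; as a sketch, the same idea with pipeable operators (RxJS 6+) might look like this:
import { interval, of } from 'rxjs';
import { concatMap, delay, take, tap } from 'rxjs/operators';

// Simulate a write request that takes a random amount of time to complete
const fakeWriteRequest = (data: number) =>
  of(data).pipe(
    tap(() => console.log('started working on:', data)),
    delay(Math.random() * 2000)
  );

interval(200)
  .pipe(
    take(10),
    // concatMap queues each value until the previous inner observable completes
    concatMap(i => fakeWriteRequest(i))
  )
  .subscribe(e => console.log(e));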

redux-observable: Mapping to an action as soon as another was triggered at least once

I have an SPA that is loading some global/shared data (let's call this APP_LOAD_OK) and page-specific data (DASHBOARD_LOAD_OK) from the server. I want to show a loading animation until both APP_LOAD_OK and DASHBOARD_LOAD_OK are dispatched.
Now I have a problem with expressing this in RxJS. What I need is to trigger an action after each DASHBOARD_LOAD_OK, as long as there has been at least one APP_LOAD_OK. Something like this:
action$
  .ofType(DASHBOARD_LOAD_OK)
  .waitUntil(action$.ofType(APP_LOAD_OK).first())
  .mapTo(...)
Does anybody know how I can express this in valid RxJS?
You can use withLatestFrom since it will wait until both sources emit at least once before emitting. If you use the DASHBOARD_LOAD_OK as the primary source:
action$.ofType(DASHBOARD_LOAD_OK)
  .withLatestFrom(action$.ofType(APP_LOAD_OK) /*Optionally*/.take(1))
  .mapTo(/*...*/);
This allows you to keep emitting in the case that DASHBOARD_LOAD_OK fires more than once.
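As a usage sketch, a full epic built on that snippet might look like the following ('HIDE_LOADING' is a made-up action type for illustration, and the same RxJS 5 / redux-observable setup as in the question is assumed):
const hideLoadingEpic = action$ =>
  action$.ofType(DASHBOARD_LOAD_OK)
    .withLatestFrom(action$.ofType(APP_LOAD_OK).take(1))
    .mapTo({ type: 'HIDE_LOADING' });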
I wanted to avoid implementing a new operator, because I thought my RxJS knowledge was not good enough for that, but it turned out to be easier than I thought. I am keeping this open in case somebody has a nicer solution. Below you can find the code.
Observable.prototype.waitUntil = function(trigger) {
  const source = this;
  let buffer = [];
  let completed = false;
  return Observable.create(observer => {
    trigger.subscribe(
      undefined,
      undefined,
      () => {
        buffer.forEach(data => observer.next(data));
        buffer = undefined;
        completed = true;
      });
    source.subscribe(
      data => {
        if (completed) {
          observer.next(data);
        } else {
          buffer.push(data);
        }
      },
      observer.error.bind(observer),
      observer.complete.bind(observer)
    );
  });
};
If you want to receive every DASHBOARD_LOAD_OK after the first APP_LOAD_OK, you can simply use skipUntil:
action$
  .ofType(DASHBOARD_LOAD_OK)
  .skipUntil(action$.ofType(APP_LOAD_OK).take(1))
  .mapTo(...)
This would only start emitting DASHBOARD_LOAD_OK actions after the first APP_LOAD_OK; all actions before that are ignored.
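To make the difference to the buffering waitUntil approach above concrete, here is a rough marble-style sketch (D = DASHBOARD_LOAD_OK, A = APP_LOAD_OK):
// actions:                 --D--D--A--D--D-->
// skipUntil(A):            ---------D--D---->   the D's before A are dropped
// waitUntil(A.first()):    --------(DD)-D--D->  the earlier D's are buffered and replayed once A arrives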

RxJS 5 Timed Cache

I am trying to get a time-expiry cache to work for an observable that abstracts a "request-response", using postMessage and message events on the window.
The remote window expects a message getItemList and replies to it with a message of type {type: 'itemList', data: []}.
I would like to model the itemList$ observable in such a way that it caches the last result for 3 seconds, so that no new requests are made during that time. However, I cannot think of a way to achieve that in an elegant (read: one observable, no subjects) and succinct manner.
Here is the example in code:
const remote = someIframe.contentWindow;
const getPayload = message => message.data;
const ofType = type => message => message.type === type;

// all messages coming in from the remote iframe
const messages$ = Observable.fromEvent(window, 'message')
  .map(getPayload)
  .map(JSON.parse);

// the observable of (cached) items
const itemList$ = Observable.defer(() => {
  console.log('sending request');
  // sending a request here, should happen once every 3 seconds at most
  remote.postMessage('getItemList');
  // listening to remote messages with the type `itemList`
  return messages$
    .filter(ofType('itemList'))
    .map(getPayload);
})
  .cache(1, 3000);

/**
 * Always returns a promise of the list of items
 * @returns {Promise<T>}
 */
function getItemList() {
  return itemList$
    .first()
    .toPromise();
}

// poll every second
setInterval(() => {
  getItemList()
    .then(response => console.log('got response', response));
}, 1000);
I am aware of the (very similar) question, but I am wondering if anyone can come up with a solution without explicit subjects.
Thank you in advance!
I believe you are looking for the rxjs operator throttle:
Documentation on rxjs github repo
Returns an Observable that emits only the first item emitted by the source Observable during sequential time windows of a specified duration.
Basically, if you would like to wait until the inputs have quieted for a certain period of time before taking action, you want to debounce.
If you do not want to wait at all, but do not wish to make more than 1 query within a specific amount of time, you will want to throttle. From your use case, I think you want to throttle.
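To make the distinction concrete, here is a minimal sketch using the time-based variants throttleTime and debounceTime (assuming the same operator-patched RxJS 5 Observable as in the question's snippet):
// a value every second: 0, 1, 2, 3, ...
const source$ = Observable.interval(1000);

// throttle: let a value through, then ignore everything for the next 3 seconds
source$.throttleTime(3000)
  .subscribe(v => console.log('throttled:', v));

// debounce: only emit once the source has been silent for 3 seconds,
// so with a steady 1-second interval this never fires
source$.debounceTime(3000)
  .subscribe(v => console.log('debounced:', v));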
