I have a question on RxJS throttle.
When I set up leading: true, trailing: false:
const result = interval(1000).pipe(
throttle(() => interval(2000), { leading: true, trailing: false })
);
result.subscribe((x) => console.log(x));
I get 2, 4, 6, 8, which is correct.
When I set up leading: false, trailing: true:
const result = interval(1000).pipe(
throttle(() => interval(2000), { leading: false, trailing: true })
);
result.subscribe((x) => console.log(x));
I get 0, 3, 6, which is correct.
When I set up leading: true, trailing: true:
const result = interval(1000).pipe(
throttle(() => interval(2000), { leading: true, trailing: true })
);
result.subscribe((x) => console.log(x));
I thought I would get the results of the above two scenarios combined, but instead I get 0, 2, 4, 6, which I do not understand.
You have to check out the source to see why it behaves this way (throttle.ts).
The leading config parameter is only referenced once, in the onNext callback function created when the source is subscribed to. That function checks whether the operator is currently waiting on an observable created from the durationSelector (called throttled); if it is not, it calls send when leading is true, and startThrottle otherwise.
// from onNext callback
!(throttled && !throttled.closed) && (leading ? send() : startThrottle(value));
The send function emits and calls startThrottle, while startThrottle creates the throttled observable.
// from send
subscriber.next(value);
!isComplete && startThrottle(value); // creates throttled.
So here's the important part: after throttled completes, if trailing is true, send is called, just as the onNext callback does when leading is true. This effectively creates a loop: as soon as throttled is done, a new throttled is created.
// from endThrottling (the callback invoked when throttled emits or completes)
throttled = null; // no more throttled
if (trailing) {
send(); // going to create throttled again.
The only way out of this loop is if the source does not emit while the duration observable is active. If there is no value ready to emit, then send won't call startThrottle, so the next call to send will have to come from the original onNext callback.
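Tracing the leading: true, trailing: true example against this logic explains the output. Here is a rough timeline (assuming the source tick at a given instant is processed just before the duration tick, which matches the reported output):

// source:   interval(1000) -> 0 at t=1s, 1 at t=2s, 2 at t=3s, ...
// duration: interval(2000), restarted by each send
// t=1s: 0 arrives, nothing throttled -> leading path, send(0), startThrottle
// t=2s: 1 arrives while throttled -> stored as the pending value
// t=3s: 2 arrives (stored), then throttled fires -> trailing path, send(2), startThrottle
// t=4s: 3 arrives while throttled -> stored
// t=5s: 4 arrives (stored), then throttled fires -> trailing path, send(4), startThrottle
// ...
// Output: 0, 2, 4, 6, ... -- after the first value, the trailing loop always
// has a pending value, so the leading path is never taken again.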
Related
I am trying to achieve the following with RxJS: given an array of job ids, for every id in the array, poll an endpoint that returns the status of the job. The status can be either "RUNNING" or "FINISHED". The code should poll the jobs one after the other, and keep polling as long as a job has the "RUNNING" status. As soon as a job reaches the "FINISHED" status, it should be passed downstream and excluded from further polling.
Below is a minimal toy case that demonstrates the problem.
const {
from,
of,
interval,
mergeMap,
filter,
take,
tap,
delay
} = rxjs;
const { range } = _;
const doRequest = (input) => {
const status = Math.random() < 0.15 ? 'FINISHED' : 'RUNNING';
return of({ status, value: input })
.pipe(delay(500));
};
const number$ = from(range(1, 10));
const poll = (number) => interval(5000).pipe(
mergeMap(() => {
return doRequest(number)
}),
tap(console.log),
filter(( {status} ) => status === 'FINISHED'),
take(1)
);
const printout$ = number$.pipe(
mergeMap((number) => {
return poll(number)
})
);
printout$.subscribe(console.log);
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/7.5.5/rxjs.umd.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.21/lodash.min.js"></script>
It does most of what I described; but it polls all endpoints simultaneously rather than one after another. Here, roughly, is the pattern I would like to achieve:
starting with ids: [1, 2, 3]
polling: await request 1 then await request 2 then await request 3
then wait for n seconds; then repeat
after job 2 is finished, send request 1, then send request 3, then wait, then repeat
after job 3 is finished, send request 1, then wait, repeat
after job 1 is finished, complete the stream
I feel that in order to send the requests in sequence, they should be concatMapped; but in the snippet above that's not possible because of the interval, which would prevent each polling stream from terminating.
Could you please advise how to modify my code to achieve what I am describing?
If I understand the problem right, I would proceed like this.
First of all, I would create a poll function that returns an Observable which notifies after a round of polling, emitting an array of all the numbers for which the call to doRequest returned 'RUNNING'. Such a function would look something like this:
const poll = (numbers: number[]) => {
return from(numbers).pipe(
concatMap((n) =>
doRequest(n).pipe(
filter((resp) => resp.status === 'RUNNING'),
map((resp) => resp.value)
)
),
toArray()
);
};
Then what you need to do is recursively call the poll function until the array emitted by the Observable returned by poll is empty.
Recursion in RxJS is typically achieved with the expand operator, and that is the operator we are going to use in this case as well, like this:
poll(numbers)
.pipe(
expand((numbers) =>
numbers.length === 0
? EMPTY
: timer(2000).pipe(concatMap(() => poll(numbers)))
)
)
.subscribe(console.log);
A complete example can be seen in this stackblitz.
UPDATE
If the objective is to notify which job ids have finished, while keeping the polling logic, the structure of the solution remains the same (a poll function and recursion via expand), but the details are different.
The poll function makes sure we emit all the responses of a polling round and it looks like this:
const poll = (
numbers: number[]
) => {
console.log(`Polling ${numbers}`);
return from(numbers).pipe(
concatMap((n) => doRequest(n)),
toArray()
);
};
The recursion logic makes sure that all jobs still in the "RUNNING" status are polled again; we then filter out only the jobs which are "FINISHED" and pass them downstream. In other words, the logic looks like this:
poll(start)
.pipe(
expand((responses) => {
const numbers = responses.filter(r => r.status === 'RUNNING').map(r => r.value)
return numbers.length === 0
? EMPTY
: timer(2000).pipe(concatMap(() => poll(numbers)));
}),
map(responses => responses.filter(r => r.status === 'FINISHED')),
filter(finished => finished.length > 0)
)
.subscribe({
next: responses => console.log(`Job finished ${responses.map(r => r.value)}`),
complete: () => {console.log('All processed')}
});
A working example can be seen in this stackblitz.
Updated: Original answer was not on the right track.
What we want to achieve is that on each go-around of the interval we poll all the outstanding jobs in order, yield any completed jobs to the output observable, and omit those completed jobs from subsequent polls.
We can do that by using a Subject instead of a static observable of the job IDs. We start our poll interval and use withLatestFrom to include the latest list of job IDs. We can then tap the output observable when we get a finished job and update the Subject to omit that job from future polls.
To end the poller interval we can create an observable that fires when the array of outstanding jobs is empty and use takeUntil with that.
const number$ = new Subject();
const noMoreNumber$ = number$.pipe(skipWhile((numbers) => numbers.length > 0));
const printout$ = interval(5000).pipe(
withLatestFrom(number$),
switchMap(([_, numbers]) => {
return numbers.map((number) => defer(() => doRequest(number)));
}),
concatAll(),
//tap(console.log),
filter(({ status }) => status === 'FINISHED'),
withLatestFrom(number$),
tap(([{ value }, numbers]) =>
number$.next(numbers.filter((num) => num != value))
),
map(([item]) => item),
takeUntil(noMoreNumber$)
);
printout$.subscribe({
next: console.log,
error: console.error,
complete: () => console.log('COMPLETE'),
});
number$.next([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
The other tweak I would make is to use switchMap instead of mergeMap inside the poller itself. If you combine that with fromFetch for your HTTP calls, then if some long-running HTTP call gets stuck, on the next poll the previous call will be cancelled before the next HTTP call is made, because switchMap disposes of the previous inner observable before subscribing to the new one.
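For illustration, here is a minimal sketch of that combination (the '/api/job-status' endpoint is hypothetical):

import { interval } from 'rxjs';
import { fromFetch } from 'rxjs/fetch';
import { switchMap } from 'rxjs/operators';

// On each tick, switchMap unsubscribes from the previous inner observable.
// fromFetch aborts its underlying fetch on unsubscribe, so a stuck request
// is cancelled before the next poll goes out.
const poll$ = interval(5000).pipe(
  switchMap(() => fromFetch('/api/job-status'))
);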
Here's a working example:
https://stackblitz.com/edit/js-gxrrb3?devToolsHeight=33&file=index.js
Try this:
import { delay, EMPTY, from, of, range } from 'rxjs';
import { concatMap, filter, mergeMap, tap, toArray } from 'rxjs/operators';
const number$ = from(range(1, 3));
const doRequest = (input) => {
const status = Math.random() < 0.15 ? 'FINISHED' : 'RUNNING';
return of({ status, value: input }).pipe(delay(1000));
};
const poll = (jobs: object[]) => {
return from(jobs).pipe(
filter((job) => job['status'] !== 'FINISHED'),
concatMap((job) => doRequest(job['value'])),
tap((job) => {
console.log('polling with................', job);
}),
toArray(),
tap((result) => {
console.log('current jobs................', JSON.stringify(result));
}),
mergeMap((result) =>
result.length > 0 ? poll(result) : of('All job completed!')
)
);
};
const initiateJob = number$.pipe(
mergeMap((id) => doRequest(id)),
toArray(),
tap((jobs) => {
console.log('initialJobs: ', JSON.stringify(jobs));
}),
concatMap(poll)
);
initiateJob.subscribe({
next: console.log,
error: console.log,
complete: () => console.log('COMPLETED'),
});
I have the following code:
this.workingStore$.pipe(
filter((workingStores) => !!workingStores[docID]),
concatMap((workingStores) => {
console.log(
'returning from concatMap',
workingStores[docID].getInitialDataSet(),
);
return workingStores[docID].getInitialDataSet();
}),
filter((isSet) => {
console.log('looking for set', isSet);
return isSet;
}),
),
workingStores[docID].getInitialDataSet() returns a BehaviorSubject. Because the pipes that set it to true have completed, the BehaviorSubject is marked isStopped: true internally. Once that happens, the filter no longer fires for isSet.
Shouldn't it just know to return the final value? It seems that's not the case, so how would I write this so that the last filter always runs? The following works, but is awfully code-smelly:
concatMap((workingStores) => {
if (
workingStores[docID].getInitialDataSet().getValue() === true
) {
return of(true);
}
return workingStores[docID].getInitialDataSet();
}),
I am aware ReplaySubject will give values, even after stopped, but I don't want to emit old values to any subscriber.
ReplaySubject has a constructor that accepts the number of latest events to replay. If you provide 1 it will act similarly to your BehaviorSubject.
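For example, a minimal sketch:

import { ReplaySubject } from 'rxjs';

// A buffer size of 1 means late subscribers receive the latest value,
// even after the subject has completed (unlike a stopped BehaviorSubject).
const subject = new ReplaySubject(1);
subject.next(true);
subject.complete();

subject.subscribe((isSet) => console.log(isSet)); // logs: true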
I'm trying to build a reusable piece of code for multi files upload.
I do not want to care about the HTTP layer implementation, I want to purely focus on the stream logic.
I've built the following function to mock the HTTP layer:
let fakeUploadCounter = 0;
const fakeUpload = () => {
const _fakeUploadCounter = ++fakeUploadCounter;
return from(
Array.from({ length: 100 })
.fill(null)
.map((_, i) => i)
).pipe(
mergeMap(x =>
of(x).pipe(
delay(x * 100),
switchMap(x =>
_fakeUploadCounter % 3 === 0 && x === 25
? throwError("Error happened!")
: of(x)
)
)
)
);
};
This function simulates the progress of an upload; every third file fails at 25% progress.
With this out of the way, let's focus on the important bit: The main stream.
Here's what I want to achieve:
Only use streams: no imperative programming, no tap to push a temporary result into a subject. I could build that, but I'm looking for an elegant solution
While some files are being uploaded, I want to be able to add more files to the upload queue
As a browser can deal with only 6 HTTP calls at the same time, I do not want to take too much of that budget, so we should upload at most 3 files at the same time. As soon as one finishes, is stopped, or throws, another file should start
When a file upload throws, we should keep that file in the list and still display its progress. It won't increase anymore, but at least the user gets to see where it failed. In that case, some text on the row should indicate the error, a retry button should let us give the upload another go, and a discard button should let us remove it completely
So far, here's the code I've got:
export class AppComponent {
public file$$: Subject<File> = new Subject();
public retryFile$$: Subject<File> = new Subject();
public stopFile$$: Subject<File> = new Subject();
public files$ = this.file$$.pipe(
mergeMap(file =>
this.retryFile$$.pipe(
filter(retryFile => retryFile === file),
startWith(null),
map(() =>
fakeUpload().pipe(
map(progress => ({ progress })),
takeUntil(
this.stopFile$$.pipe(filter(stopFile => stopFile === file))
),
catchError(() => of({ error: true })),
scan(
(acc, curr: { progress: number } | { error: true }) => ({
...acc,
...curr
}),
{
file,
progress: 0,
error: false
}
)
)
)
)
),
mergeAll(3), // 3 uploads in parallel maximum
scan(
(acc, curr) => ({
...acc,
// todo we can't use the File reference directly here
// but we shouldn't use the file name either
// instead we should generate a unique ID for each upload
[curr.file.name]: curr
}),
{}
),
map(fileEntities => Object.values(fileEntities))
);
public addFile() {
this.file$$.next(new File([], `test-file-${filesCount}`));
filesCount++;
}
}
Here's the code in stackblitz that you can fork: https://stackblitz.com/edit/rxjs-upload-multiple-files-v2?file=src/app/app.component.ts
I'm pretty close! If you open the live demo in stackblitz on the right and click on the "Add file" button, you'll see that you can add many files and they'll all get uploaded. The 3rd one will fail gracefully.
Now what is not working how I'd like:
If you click quickly more than 3 times on the "add file" button, only 3 files will appear in the queue. I'd like to have all of them but only 3 should be uploading at the same time. Yet, all the files to be uploaded should be displayed in the view, just waiting to start
The stop button should remove any upload. Whether it's uploading or failed
Thanks for any help
Number 1:
If you click quickly more than 3 times on the "add file" button, only 3 files will appear in the queue. I'd like to have all of them but only 3 should be uploading at the same time. Yet, all the files to be uploaded should be displayed in the view, just waiting to start
First of all, this is a cool problem, because as far as I can see you can't simply compose the existing operators (without resorting to contortions with partition). You need a custom operator that splits your stream. If you don't want to subscribe to your source twice, you should share before splitting.
There's quite a lot of work left to implement your solution the way you'd like. BUT, in terms of getting your stream to show all files regardless of whether they're currently loading, there's really just one piece missing.
You want to split your stream. One stream should emit the default
{
  file,
  progress: 0,
  error: false
}
entries right away, and the second stream should emit updates to those files. The second stream will have mergeAll(3), but the first doesn't need this limitation, as it isn't making a network request. You merge these two streams and either update or add new entries into your output as you see fit.
Here's an example of that at work. I made a dummy example to abstract away the implementation details a bit. I start out with an array of objects with this shape,
{
id: number,
message: "HeyThere" + id,
  response: "None"
}
I make a fake httpRequest call that enriches an object to
{
id: number,
message: "HeyThere" + id,
response: "Hello"
}
The stream emits each time a new object is added or when an object is enriched. But the enriching stream is limited to max 3 httpRequest calls at once.
import { from, merge, timer } from 'rxjs';
import { debounceTime, map, mergeAll, scan, share } from 'rxjs/operators';

// Fake HTTP call: emits "Hello" after 4 seconds.
const httpRequest = () => {
  return timer(4000).pipe(
    map(_ => "Hello")
  );
}

// An array with 10 empty slots; `from` emits one value per slot, and the
// index passed to `map` gives each object its id.
const arrayO = [];
arrayO.length = 10;

from(arrayO).pipe(
  map((val, index) => ({
    id: index,
    message: "HeyThere" + index,
    response: "None"
  })),
  share(),
  // Split the stream: pass each object through as-is, and also merge in an
  // enriched copy once the (rate-limited) fake request completes.
  s => merge(s, s.pipe(
    map(ob => httpRequest().pipe(
      map(val => ({...ob, response: val}))
    )),
    mergeAll(3) // at most 3 fake requests in flight at once
  )),
  // Keep the latest version of each object in a Map keyed by id.
  scan((acc, val: any) => {
    acc.set(val.id, val);
    return acc;
  }, new Map<number, any>()),
  debounceTime(250),
  map(mapO => Array.from(mapO.values()))
).subscribe(console.log);
I added a debounce because I find it makes the output much easier to follow. Since I add all 10 un-enriched objects synchronously, the stream would otherwise spam 10 arrays to the output. Also, since every fake httpRequest takes exactly 4 seconds, three arrays would be spammed at the output every 4 seconds. The debounce stops the UI from stuttering and the console from being spammed.
Number 2
The stop button should remove any upload. Whether it's uploading or failed
This is a can of worms because every canonical solution says you should make a state management system. That would be the easiest way to interact with files that are in Queue, Loading, Failed, and Loaded all in one uniform way.
It's pretty easy to implement a lightweight Redux-style state management system using RxJS: just use scan to manage state, with plain event objects that transform it. The toughest part is managing your in-flight requests. You'd probably create a custom mergeAll() operator that takes in events, removes queued requests, and even cancels mid-flight requests if necessary.
Using a stopFile$$ works for cancelling mid-flight requests, but it falls apart if people want to stop a file load that hasn't started yet (as per your first requirement, you want those visible too). It's somewhat brittle regardless, because emitting on a subject never comes with any assurance that anybody is listening. That's another reason a Redux-style management layer is the way to go.
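To make that concrete, here is a minimal sketch of the scan-based pattern (the event names and shapes are illustrative, not taken from the code above):

import { Subject } from 'rxjs';
import { scan } from 'rxjs/operators';

// Events are plain objects describing what happened.
const events$$ = new Subject();

// State is derived by folding events into an accumulator.
const state$ = events$$.pipe(
  scan((state, event) => {
    switch (event.type) {
      case 'FILE_ADDED':
        return { ...state, [event.id]: { id: event.id, status: 'QUEUED' } };
      case 'FILE_REMOVED': {
        // Works whether the file is queued, loading, or failed.
        const { [event.id]: _, ...rest } = state;
        return rest;
      }
      default:
        return state;
    }
  }, {})
);

state$.subscribe(console.log);
events$$.next({ type: 'FILE_ADDED', id: 1 });   // { '1': { id: 1, status: 'QUEUED' } }
events$$.next({ type: 'FILE_REMOVED', id: 1 }); // {}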
This is a very interesting problem, here is my approach to it:
uploadFile$ = this.uploadFile.pipe(
multicast(new Subject<CustomFile>(), subject =>
merge(
subject.pipe(
mergeMap(
// `file.id` might be created with uuid() or something like that
(file, idx) =>
of({ status: FILE_STATUS.PENDING, ...file }).pipe(
observeOn(asyncScheduler),
takeUntil(subject)
)
)
),
subject.pipe(
mergeMap(
(file, idx) =>
fakeUpload(file).pipe(
map(progress => ({
...file,
progress,
status: FILE_STATUS.LOADING
})),
startWith({
name: file.name,
status: FILE_STATUS.LOADING,
id: file.id,
progress: 0
}),
catchError(() => of({ ...file, status: FILE_STATUS.FAILED })),
scan(
(acc, curr) => ({
...acc,
...curr
}),
{} as CustomFile
),
takeUntil(
this.stopFile.pipe(
tap(console.warn),
filter(f => f.id === file.id)
)
)
),
3
)
)
)
)
);
files$: Observable<CustomFile[]> = merge(
this.uploadFile$,
this.stopFile
).pipe(
tap(v =>
v.status === FILE_STATUS.REMOVED ? console.warn(v) : console.log(v)
),
scan((filesAcc, crtFile) => {
// if the file is being removed, we need to remove it from the list
if (crtFile.status === FILE_STATUS.REMOVED) {
const { [crtFile.id]: _, ...rest } = filesAcc;
return rest;
}
// simply return an updated copy of the object when the file has the status either
// * `pending`(the buffer's length is > 3)
// * `loading`(the file is being uploaded)
// * `failed`(an error occurred during the file upload, but we keep it in the list)
// * `retrying`(the `Retry` button has been pressed)
return {
...filesAcc,
[crtFile.id]: crtFile
};
}, Object.create(null)),
// Might want to replace this by making the `scan`'s seed return an object that implements a custom iterator
map(obj => Object.values(obj))
);
StackBlitz demo.
I think the biggest problem here was how to determine when the mergeMap's buffer is full, so that a pending item should be shown to the user. As you can see, I've solved this using the multicast's second parameter:
multicast(new Subject(), subject => ...)
multicast(new Subject()) followed by refCount(), i.e. without the second argument, is the same as share(). But when you provide the second argument (a.k.a. the selector), you can achieve some sort of local multicasting:
if (isFunction(selector)) {
return operate((source, subscriber) => {
// the first argument
const subject = subjectFactory();
/* .... */
selector(subject).subscribe(subscriber).add(source.subscribe(subject));
});
}
selector(subject).subscribe(subscriber) subscribes to the observable (which can also be a Subject) returned from the selector. Then, with .add(source.subscribe(subject)), the source itself is subscribed to. In the selector we've used merge(subject.pipe(...), subject.pipe(...)), each branch of which gains access to what's being pushed into the stream. Because of add(source.subscribe(subject)), the source's values are passed to the Subject instance, which forwards them to its subscribers.
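Here is a tiny standalone demonstration of that local multicasting (illustrative; the selector form subscribes to the source only once, even for a synchronous source):

import { of, merge, Subject } from 'rxjs';
import { multicast, map } from 'rxjs/operators';

of(1, 2, 3)
  .pipe(
    multicast(new Subject(), (subject) =>
      // Both branches subscribe to the same Subject before the source
      // is subscribed to, so each value reaches both of them.
      merge(
        subject.pipe(map((v) => `a${v}`)),
        subject.pipe(map((v) => `b${v}`))
      )
    )
  )
  .subscribe(console.log);
// logs: a1, b1, a2, b2, a3, b3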
So, the way I solved the aforementioned problem was to create a race between observables. The first contender is
// #1
subject.pipe(
mergeMap(
// `file.id` might be created with uuid() or something like that
(file, idx) =>
of({ status: FILE_STATUS.PENDING, ...file }).pipe(
observeOn(asyncScheduler),
takeUntil(subject)
)
)
),
and the second one is
// #2
subject.pipe(
mergeMap(
(file, idx) => fakeUpload(file).pipe(
/* ... */
// emits synchronously - as soon as the inner subscriber is created
startWith(...)
)
)
)
So, as soon as the Subject (the subject variable in this case) receives a value from the source, it sends it to all of its subscribers - the two contenders. This all happens synchronously, which also means that the order matters: #1 is the first subscriber to receive the value, and #2 is the second. The winner is whichever of the two subscribers emits first.
Notice that the first one passes the value along asynchronously (with the help of observeOn(asyncScheduler)), while the second one does so synchronously. The first one will emit first only if the buffer is full; otherwise, the second one will.
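Stripped of the mergeMap and buffering details, the ordering mechanism looks like this (illustrative):

import { Subject, merge, asyncScheduler } from 'rxjs';
import { map, observeOn } from 'rxjs/operators';

const subject = new Subject();

merge(
  // contender #1: re-emits each value asynchronously
  subject.pipe(observeOn(asyncScheduler), map((v) => `async: ${v}`)),
  // contender #2: re-emits each value synchronously
  subject.pipe(map((v) => `sync: ${v}`))
).subscribe(console.log);

subject.next(1);
// logs "sync: 1" first, then "async: 1" on a later tick;
// in the answer above, #2 only emits synchronously when mergeMap's buffer
// has room, so the "pending" branch (#1) wins exactly when the buffer is full.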
I've ended up with the following:
export interface FileUpload {
file: File;
progress: number;
error: boolean;
toRemove: boolean;
}
export const uploadManager = () => {
const file$$: Subject<File> = new Subject();
const retryFile$$: Subject<File> = new Subject();
const stopFile$$: Subject<File> = new Subject();
const fileStartOrRetry$: Observable<File> = file$$.pipe(
mergeMap(file =>
retryFile$$.pipe(
filter(retryFile => retryFile === file),
startWith(file)
)
),
share()
);
const addFileToQueueAfterStartOrRetry$: Observable<
FileUpload
> = fileStartOrRetry$.pipe(
map(file => ({
file,
progress: 0,
error: false,
toRemove: false
}))
);
const markFileToBeRemovedAfterStop$: Observable<FileUpload> = stopFile$$.pipe(
map(file => ({
file,
progress: 0,
error: false,
toRemove: true
}))
);
const updateFileProgress$: Observable<FileUpload> = fileStartOrRetry$.pipe(
map(file =>
uploadMock().pipe(
map(progress => ({ progress })),
takeUntil(
stopFile$$.pipe(filter(stopFile => stopFile.name === file.name))
),
catchError(() => of({ error: true })),
scan(
(acc, curr: { progress: number } | { error: true }) => ({
...acc,
...curr
}),
{
file,
progress: 0,
error: false,
toRemove: false
}
)
)
),
// 3 uploads in parallel maximum
mergeAll(3)
);
const files$: Observable<FileUpload[]> = merge(
addFileToQueueAfterStartOrRetry$,
updateFileProgress$,
markFileToBeRemovedAfterStop$
).pipe(
scan<FileUpload, { [key: string]: FileUpload }>((acc, curr) => {
if (curr.toRemove) {
const copy = { ...acc };
delete copy[curr.file.name];
return copy;
}
return {
...acc,
// todo we can't use the File reference directly here
// but we shouldn't use the file name either
// instead we should generate a unique ID for each upload
[curr.file.name]: curr
};
}, {}),
map(fileEntities => Object.values(fileEntities))
);
return {
files$,
file$$,
retryFile$$,
stopFile$$
};
};
It covers all the cases as demonstrated here: https://rxjs-upload-multiple-file-v3.stackblitz.io
The code is here: https://stackblitz.com/edit/rxjs-upload-multiple-file-v3?file=src/app/upload-manager.ts
It's based on Mrk Sef's suggestion. It clicked after he mentioned "You want to split your stream".
Hello,
first of all, thank you for reading this. 🙏
I want to handle a stream of scroll events: react to the start of a scroll and ignore the following burst of scroll events until the stream is considered inactive (a time limit). After that delay, I want to repeat the same behavior.
This is my solution so far:
import { fromEvent } from 'rxjs';
import { throttle, debounceTime } from 'rxjs/operators';
const stream = fromEvent(window, 'scroll');
const controllerStream = stream.pipe(debounceTime(500));
this.sub = stream
.pipe(
throttle(() => controllerStream, {
leading: true,
trailing: false,
})
)
.subscribe(() => {
// react on scroll-start events
});
Is there a better way?
I was considering operators like throttleTime, debounce, and debounceTime, but I could not find a configuration matching my needs.
Thank you 🙏👍
While this solution looks a bit involved, it achieves the behavior you describe, and can be encapsulated cleanly in a custom operator.
import { of, merge, NEVER } from 'rxjs';
import { share, exhaustMap, debounceTime, takeUntil } from 'rxjs/operators';
const firstAfterInactiveFor = (ms) => (source) => {
// Multicast the source since we need to subscribe to it twice.
const sharedSource = source.pipe(share());
return sharedSource.pipe(
// Ignore source until we finish the observable returned from exhaustMap's
// callback
exhaustMap((firstEvent) =>
// Create an observable that emits only the initial scroll event, but never
// completes (on its own)
merge(of(firstEvent), NEVER).pipe(
// Complete the never-ending observable once the source is dormant for
// the specified duration. Once this happens, the next source event
// will be allowed through and the process will repeat.
takeUntil(sharedSource.pipe(debounceTime(ms)))
)
)
);
};
// This achieves the desired behavior.
stream.pipe(firstAfterInactiveFor(500))
I have made a third version, encapsulating my solution into a custom operator based on backtick's answer. Is there a problem with this solution? A memory leak or something? I am not sure whether the inner controllerStream will be torn down properly, or at all.
const firstAfterInactiveFor = (ms) => (source) => {
const controllerStream = source.pipe(debounceTime(ms));
return source
.pipe(
throttle(() => controllerStream, {
leading: true,
trailing: false
})
)
};
// This achieves the desired behavior.
stream
.pipe(
firstAfterInactiveFor(500)
)
.subscribe(() => {
console.log("scroll-start");
});
Here is a CodePen comparing all three:
https://codepen.io/luckylooke/pen/zYvEoyd
EDIT:
A better example, with logs and an unsubscribe button:
https://codepen.io/luckylooke/pen/XWmqQBg
I have a stream whose value is initially an empty object. Over time, this object gets its keys filled in.
const RXSubject = new BehaviorSubject({});
RXSubject.pipe(
filter((frame): frame is InstDecodedFrame => frame.type === FrameType.INST),
scan<InstDecodedFrame, InstantDataDictionnary>(
(acc, frame) => ({ ...acc, ...frame.dataList }),
{},
),
);
Now, I subscribe to this piped observable in one part of the app. But if I subscribe somewhere else after the last value failed the filter condition, my new subscription just gets nothing.
Is there any way for any subscriber to the pipe to get the latest "valid" value?
Thanks
You can add shareReplay(1) after the scan() and subscribe to that observable:
const obs$ = RXSubject.pipe(
filter((frame): frame is InstDecodedFrame => frame.type === FrameType.INST),
scan<InstDecodedFrame, InstantDataDictionnary>(
(acc, frame) => ({ ...acc, ...frame.dataList }),
{},
),
shareReplay(1),
);
Then you'll subscribe to obs$ instead of RXSubject.
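With this in place, a late subscriber immediately receives the latest accumulated dictionary, even if the most recent source emission was filtered out. For instance:

// Somewhere else in the app, possibly much later:
obs$.subscribe((dict) => console.log('late subscriber got:', dict));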