rxjs switch unwrapping observable

I set up a subject and then put some methods on it. It seems to work as intended until it gets to .switch(), which I thought would simply keep track of the last call. Instead I get the error: Property 'subscribe' does not exist on type 'ApiChange'. It seems to convert the observable to type ApiChange. I don't understand this behavior. Should I be using a different operator?
Service:
private apiChange = new Subject<ApiChange>();
apiChange$ = this.apiChange.asObservable().distinctUntilChanged().debounceTime(1000).switch();
Component:
this.service.apiChange$.subscribe(change => {
  this.service.method(change);
});

.debounceTime(1000) already ensures you get at most one value out of your observable chain per second: a value is emitted only after a 1-second quiet time, and every value superseded within that window is discarded.
With a simple Subject (as opposed to a ReplaySubject), past values are not replayed to new subscribers anyway.
You probably just want to drop the .switch() and enjoy the chain without it. switch() is meant to flatten a higher-order observable (an observable of observables); applied to a plain Observable<ApiChange>, the typings infer the inner type as ApiChange, which has no subscribe method, hence the error.
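For illustration, the service chain from the question with the final .switch() removed; this is a sketch in the question's RxJS 5 prototype-operator style:

private apiChange = new Subject<ApiChange>();
// skip duplicates, then emit only after a 1-second quiet period
apiChange$ = this.apiChange.asObservable()
    .distinctUntilChanged()
    .debounceTime(1000);

The component-side subscription from the question then works unchanged, because apiChange$ is now a plain Observable<ApiChange> rather than the result of trying to flatten one.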

RxJS: Second observable's input argument depends on first observable's output

I have two observables:
baseObs$ = serviceCall(); // this returns Observable<Foo>
secondObs$ = serviceCall(args); // this returns Observable<Bar>
args in this example is a public variable defined somewhere else. It doesn't need to be, though, if that makes this easier.
baseObs I can call whenever I want, but secondObs I can only call after baseObs has successfully completed and been handled (I don't know the right words, so an example follows):
I have now something like
baseObs$.subscribe(x => {
  const args = x.args; // just an example. Point is, I need x to build args.
  serviceCall(args).subscribe(y => {
    console.log(y); // This is fine
  });
});
This suits my needs, but I got feedback that no subscribe should live inside another subscribe. How would you achieve the same thing using baseObs$ and secondObs$ as defined above?
PS. This is all pseudocode, but hopefully I didn't make too many typos. I think the idea should be clear.
In the simple case that the first observable only emits once (like a typical HTTP request), any one of switchMap, mergeMap etc. will do:
serviceCall().pipe(
  switchMap(x => serviceCall(x.args))
).subscribe(console.log);
If that assumption does not hold, you're going to want to read their respective documentation to understand how behavior differs between them. In fact, I'd recommend reading up on their differences even just in general, as it's very valuable knowledge when dealing with reactive code.
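If you also need the first result downstream, a common variation of the same pattern is to pair the values inside the inner pipe; a sketch, assuming RxJS 6+ pipeable operators and the question's serviceCall:

import { map, switchMap } from 'rxjs/operators';

serviceCall().pipe(
    switchMap(x => serviceCall(x.args).pipe(
        // keep x alongside the second result
        map(y => ({ x, y }))
    ))
).subscribe(({ x, y }) => console.log(x, y));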

How to convert a Flux<Object> into a List<Object>

I have a Flux and I want to convert it to a List. How can I do that?
Flux<Object> getInstances(String serviceId); // current signature
List<Object> getInstances(String serviceId); // desired signature
Do Java 8 or the reactive components have a ready-made method to map or convert it to a List?
Should I use .map()?
final List<ServiceInstance> sis = convertedStringList.parallelStream()
    .map( this.reactiveDiscoveryClient::getInstances )
    // this still needs to be converted to List<Object>
1. Make sure you want this
A fair warning before diving into anything else: converting a Flux to a List/Stream makes the whole thing non-reactive in the strict sense of the concept, because you are leaving the push domain and trading it for a pull domain. You may or may not want this (usually you don't), depending on the use case. Just wanted to leave the note.
2. Converting a Flux to a List
According to the Flux documentation, the collectList method returns a Mono<List<T>>. It returns immediately, but what it returns is not the resulting list itself; it is a lazy structure, the Mono, that promises the result will eventually be there once the sequence completes.
According to the Mono documentation, the block method returns the contents of the Mono once it completes. Keep in mind that block may return null.
Combining both, you can use someFlux.collectList().block(). Provided that someFlux is a Flux<Object>, the result will be a List<Object>.
Note that block will never return if the Flux is infinite. As an example, the following returns a list with two words:
Flux.fromArray(new String[]{"foo", "bar"}).collectList().block()
But the following will never return:
Flux.interval(Duration.ofMillis(1000)).collectList().block()
To prevent blocking indefinitely or for too long, you may pass a Duration argument to block, e.g. someFlux.collectList().block(Duration.ofSeconds(5)), but then block will time out with an exception if the subscription does not complete in time.
3. Converting a Flux to a Stream
According to the Flux documentation, the toStream method converts a Flux<T> into a Stream<T>. This is friendlier to operators such as flatMap. Consider this simple example, for the sake of demonstration:
Stream.of("f")
.flatMap(letter ->
Flux.fromArray(new String[]{"foo", "bar"})
.filter(word -> word.startsWith(letter)).toStream())
.collect(Collectors.toList())
One could simply use .collectList().block().stream(), but not only is that less readable, it could also throw a NullPointerException if block returned null. The toStream approach does not finish for an infinite Flux either, but because the stream is consumed lazily, you can still apply operations to it before the Flux completes, without blocking.

Can I use data loader without batching

What I am really interested in with DataLoader is the per-request caching. For example, say my GraphQL query needs to call getUser("id1") three times. I would like something to dedupe that call.
However, it seems that with DataLoader I need to pass an array of keys into my batch function, and multiple requests will be batched into one API call.
This has me making a few assumptions that I don't like:
1. That each service I am calling has a batch API (some of the ones I'm dealing with do not).
2. What if multiple calls get batched into one API call, and that call fails because one of the items was not found? Normally I could handle this by returning null for that field, which can be a valid case. Now, however, my entire call may fail if the batch API decides to throw an error because one item was not found.
Is there any way to use DataLoader with single-key requests?
Both assumptions are wrong because the implementation of the batch function is ultimately left up to you. As indicated in the documentation, the only requirements when writing your batch function are as follows:
A batch loading function accepts an Array of keys, and returns a Promise which resolves to an Array of values or Error instances.
So there's no need for the underlying data source to also accept an array of IDs. And there's no need for one or more failed calls to cause the whole function to throw, since you can return either null or an Error instance for any particular ID in the array you return. In fact, your batch function should never throw; instead it should return any errors as elements of the resulting array.
In other words, your implementation of the batch function might look something like:
async function batchFn (ids) {
  // Resolve each ID individually so a failure for one ID
  // does not fail the whole batch.
  return Promise.all(ids.map(async (id) => {
    try {
      return await getFooById(id)
    } catch (e) {
      // either return null or the error instance
      return e
    }
  }))
}
It's worth noting that it's also possible to set maxBatchSize to 1 to effectively disable batching. However, this doesn't change the requirements for how your batch function is implemented: it always needs to take an array of IDs and always needs to return an array of values/errors of the same length as the array of IDs.
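For illustration, wiring the batch function above into a loader might look like this; a sketch, where getFooById from the snippet above remains a hypothetical single-key fetch:

import DataLoader from 'dataloader';

// maxBatchSize: 1 effectively disables batching; batchFn still
// receives an array of keys, it just always has length one.
const loader = new DataLoader(batchFn, { maxBatchSize: 1 });

// (inside an async function or an ES module with top-level await)
// Repeated loads of the same key within a request are served from
// DataLoader's per-request cache, so getFooById runs only once here.
const results = await Promise.all([
    loader.load('id1'),
    loader.load('id1'),
    loader.load('id1'),
]);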
Daniel's solution is perfectly fine and is in fact what I've used so far, after extracting it into a helper function.
But I just found another solution that does not require that much boilerplate code.
new DataLoader<string, string>(async ([key]) => [await getEntityById(key)], {batch: false});
When we set batch: false, the key array passed as the argument always has size one. We can therefore simply destructure it and return a one-element array with the data. Notice the brackets around the return value! If you omit those, this could go horribly wrong, e.g. for string values.
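For completeness, a usage sketch of this variant, with getEntityById again standing in for the hypothetical single-key fetch:

const loader = new DataLoader<string, string>(
    async ([key]) => [await getEntityById(key)],
    { batch: false }
);

// batching is off, but the per-request cache still dedupes repeated keys
const entity = await loader.load('id1');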

shareReplay vs ReplaySubject - only ReplaySubject caches latest value before subscription

I have an external hot source pushing values before observers can subscribe. Upon subscription, late observers should receive the latest value and every value from that point on. For this, I used the following code (the relevant line is marked with '<<<'; the s Subject is only there to keep the sample as simple as possible; in reality the hot source works differently):
// irrelevant, just to send values
const s = new Subject();
// make the observable cache the last value
const o = s.pipe(shareReplay(1)); // <<<
// now, before subscription, values start coming in
s.next(1);
s.next(2);
s.next(3);
o.subscribe(n => console.warn('!!!', n));
This doesn't work (I expected it to print !!! 3 but nothing happens), but I found a way to make it work:
// irrelevant, just to send values
const s = new Subject();
const r = new ReplaySubject(1);
s.subscribe(r);
const o = r.asObservable();
s.next(1);
s.next(2);
s.next(3);
o.subscribe(n => console.warn('!!!', n));
i.e. instead of using shareReplay(1), I create a ReplaySubject(1) and use it as a bridge. With this code, I do get the coveted !!! 3.
While I'm happy it works, I would like to understand why the first snippet doesn't. I always thought shareReplay was pretty much equivalent to the second approach, and actually kind of implemented this way. What am I missing?
When you use s.pipe(shareReplay(1)), you're just adding an operator to the chain (a bit like changing the chain's prototype). But there's no subscription, and shareReplay doesn't subscribe to its source while it has no observers of its own. So it's not caching anything, because there's no subscription to the source Observable, even when the source is "hot".
However, when you use s.subscribe(r), you're making a regular subscription to s, so r starts receiving items and the ReplaySubject caches them.
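If you want to stay with shareReplay, one workaround is to attach an eager dummy subscription so the operator connects to the source before the values arrive; a sketch using the same setup as the question:

const s = new Subject();
const o = s.pipe(shareReplay(1));
o.subscribe(); // dummy subscriber: forces shareReplay to subscribe to s

s.next(1);
s.next(2);
s.next(3);

o.subscribe(n => console.warn('!!!', n)); // now prints !!! 3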

Angular 6 avoid callback hell

Coming from AngularJS, I'm struggling to solve the following problem. I need a function that returns an object (let's call it A), but this object cannot be returned until all the requests contained in that function are resolved. The process should be like:
The object A is downloaded from a remote server
Using A, we do operations over another object (B)
B is downloaded from the server
B is patched using some attributes from A
Using A and the result of B we do operations over a third object, C
C is downloaded from the server
C is patched using some attributes from A and B
After B and C are processed, the function must return A
I'd like to understand how to do something like this using RxJS, but with Angular 6 most of the examples around the internet seem to be deprecated, and the tutorials out there are not really helping me. And I cannot modify the backend to make this a bit more elegant. Thanks a lot.
Consider the following Observables:
const sourceA = httpClient.get(/*...*/);
const sourceB = httpClient.get(/*...*/);
const sourceC = httpClient.get(/*...*/);
Where httpClient is Angular's HttpClient.
The sequence of the operations you described may look as follows:
const A = sourceA.pipe(
  switchMap(a => sourceB.pipe(
    map(b => {
      // do some operation using a and b.
      // Return both a and b in an array, but you can
      // also return them in an object if you wish.
      return [a, b];
    })
  )),
  switchMap(([a, b]) => sourceC.pipe(
    map(c => {
      // do some operations using a, b, and/or c.
      return a;
    })
  ))
);
Now you just need to subscribe to A:
A.subscribe(a => console.log(a));
You can read about RxJs operators here.
Well, first of all, it appears to me that this function call, as described, would somehow be expected to block the calling process until all of the specified events have occurred, which of course is unreasonable in JavaScript.
Therefore, I believe that your function should accept, as perhaps its only parameter, a callback that will be invoked when everything has finally taken place.
Now, as to how to handle steps 1, 2, and 3 elegantly: what immediately comes to mind is the notion of a finite-state machine (FSM) algorithm.
Let's say that your function call causes a new "request" to be placed on some request-table queue, and, if necessary, a timer request (set to go off in 1 millisecond) to service that queue. (This entry will contain, among other things, a reference to your callback.) Let's assume also that the request is given a random-string "nonce" that will serve to uniquely identify it: this nonce is passed along with the various external requests and must be included in their corresponding replies.
The FSM idea is that the request has a state (attribute), such as DOWNLOADING_FROM_B, B_DOWNLOADS_COMPLETE, DOWNLOADING_FROM_C, C_REQUESTS_COMPLETE, and so on, such that each and every callback that plays a part in this fully asynchronous process can (1) locate a request entry by its nonce, and then (2) unambiguously "know what to do next" and "what new state (if any) to assign," based solely on examining the entry's state.
For instance, when the state reaches C_REQUESTS_COMPLETE, it would be time to invoke the callback that you originally provided and to delete the request-table entry.
You can easily map out all of the "state transitions" that might occur in an arbitrarily complex scenario (what states can lead to what states, and what to do when they do), whether or not you actually create a data structure to represent that so-called "state table," although sometimes it is even more elegant(!) when you do. (Possibly messy decision logic is simply pushed to a simple table lookup.)
This is, of course, a classic algorithm that is applicable to, and has been used in, every programming language under the sun. (Lots of hardware devices use it, too.)
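A minimal sketch of this idea, with hypothetical names throughout (the state names, downloadB/downloadC, and the reply plumbing are illustrative assumptions, not a prescribed API):

type State = 'DOWNLOADING_FROM_B' | 'DOWNLOADING_FROM_C' | 'C_REQUESTS_COMPLETE';

interface Entry {
    nonce: string;
    state: State;
    a: unknown;                  // the A object the caller ultimately wants back
    b?: unknown;
    done: (a: unknown) => void;  // the caller-supplied callback
}

const requestTable = new Map<string, Entry>();

// Hypothetical async requests; each reply is assumed to echo the nonce back.
declare function downloadB(nonce: string, a: unknown): void;
declare function downloadC(nonce: string, a: unknown, b: unknown): void;

// Entry point: register the request (A is assumed already downloaded)
// and kick off the chain.
function processA(a: unknown, done: (a: unknown) => void) {
    const nonce = Math.random().toString(36).slice(2); // random-string nonce
    requestTable.set(nonce, { nonce, state: 'DOWNLOADING_FROM_B', a, done });
    downloadB(nonce, a);
}

// The "state table" as a simple lookup: what to do when a reply arrives
// is decided solely by the entry's current state.
const onReply: Record<State, (entry: Entry, payload: unknown) => void> = {
    DOWNLOADING_FROM_B: (entry, payload) => {
        entry.b = payload;                        // B arrived and was patched
        entry.state = 'DOWNLOADING_FROM_C';
        downloadC(entry.nonce, entry.a, entry.b); // now fetch C
    },
    DOWNLOADING_FROM_C: (entry) => {
        entry.state = 'C_REQUESTS_COMPLETE';      // C arrived and was patched
        entry.done(entry.a);                      // hand A back to the caller
        requestTable.delete(entry.nonce);         // and delete the entry
    },
    C_REQUESTS_COMPLETE: () => { /* terminal state: no replies expected */ },
};

// Invoked for every incoming reply, whichever request it belongs to.
function handleReply(nonce: string, payload: unknown) {
    const entry = requestTable.get(nonce);
    if (entry) onReply[entry.state](entry, payload);
}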
