This question is for learning purposes, not to solve a particular problem (please move it to the appropriate section if necessary).
I'm learning about pipeable operators in the RxJS library. The guide at https://rxjs.dev/guide/operators distinguishes between pipeable operators and creation operators.
It defines pipeable operators as follows:
A Pipeable Operator is a function that takes an Observable as its input and returns another Observable. It is a pure operation: the previous Observable stays unmodified.
And it defines creation operators as follows:
Creation Operators are the other kind of operator, which can be called as standalone functions to create a new Observable. For example: of(1, 2, 3) creates an observable that will emit 1, 2, and 3, one right after another.
But this leaves me wondering: is there such an operator as one that DOES modify the observable it gets as input and returns it as output? I haven't come across anything like that. Is there a reason such an operator doesn't exist? What kind of undesired behavior would result from such an operator?
You can see a pipeable operation as a series of function executions; most of the time there's no need to modify the upstream function. What we're interested in is transforming data and adding custom operations as we proceed down the stream:
fn(fn2(fn3(...)))
If you do want to modify upstream behavior, the upstream observable has to be designed to allow it, for instance by using a function factory that lets the user register a middleware,
e.g.
const interceptor = () => { /* ... */ };
const getUpstreamFn = (middleware) => (param) => { middleware(); /* ... */ };
const upstreamFn = getUpstreamFn(interceptor);
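To make the composition idea concrete, here is a plain-function analogy (no RxJS; the names are invented for illustration): each step derives a new value from its input and leaves the input untouched, just like a pipeable operator leaves the previous Observable unmodified.

```javascript
// Plain-function analogy of fn(fn2(fn3(...))): each step returns a NEW
// value derived from its input; the input itself is never mutated.
const fn3 = () => [1, 2, 3];              // "creation": produces the data
const fn2 = (xs) => xs.map((x) => x * 2); // "pipeable": returns a new array
const fn = (xs) => xs.filter((x) => x > 2);

const input = fn3();
const output = fn(fn2(input));

console.log(output); // [4, 6]
console.log(input);  // [1, 2, 3] — untouched, like the source Observable
```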
observable1.pipe(
withLatestFrom(observable2),
// ... do something with both emitted observable values
)
The problem with withLatestFrom is that if observable2 hasn't emitted any event before observable1 did, then it's dead code. How should I modify the code to ensure that "do something with both emitted observable values" has both observable values emitted at least once prior to the call? Maybe some wrapper around forkJoin?
You could use the RxJS combineLatestWith if you're looking for a pipeable operator instead of the combineLatest function.
observable1.pipe(
combineLatestWith(observable2),
// ... do something with both emitted observable values
)
Looks like a use case for the combineLatest operator. This operator takes an array of observables and waits for every one to emit at least once. Once all the observables have emitted, the operator emits for the first time. After that it emits every time any of the observables emits.
You can enforce a first emission with startWith(); apply it to observable1, observable2, or both, depending on your needs.
observable1.pipe(
withLatestFrom(observable2.pipe(startWith(null))),
// ... do something with both emitted observable values
)
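To see why this helps, here is a minimal plain-JavaScript simulation (not real RxJS; the helper name and the [time, value] event encoding are invented for illustration) of withLatestFrom's behavior, with and without a seeded start value.

```javascript
// Minimal plain-JS simulation (NOT real RxJS) of withLatestFrom semantics.
// Events are [time, value] pairs; `seed` mimics startWith on the second stream.
function simulateWithLatestFrom(source1Events, seed, source2Events) {
  let latest2 = seed; // undefined means source2 hasn't emitted yet
  const out = [];
  const all = [
    ...source1Events.map(([t, v]) => [t, 1, v]),
    ...source2Events.map(([t, v]) => [t, 2, v]),
  ].sort((a, b) => a[0] - b[0]); // replay all events in time order
  for (const [, stream, v] of all) {
    if (stream === 2) latest2 = v;                          // remember latest from source2
    else if (latest2 !== undefined) out.push([v, latest2]); // otherwise DROP the emission
  }
  return out;
}

// source1 emits 'a' at t=1, before source2's first emission at t=2:
const dropped = simulateWithLatestFrom([[1, 'a'], [3, 'b']], undefined, [[2, 'x']]);
console.log(dropped); // [['b', 'x']] — 'a' was silently dropped

// seeding with null (like startWith(null)) preserves 'a':
const seeded = simulateWithLatestFrom([[1, 'a'], [3, 'b']], null, [[2, 'x']]);
console.log(seeded); // [['a', null], ['b', 'x']]
```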
NgRx provides a concatLatestFrom operator implementation you can reference. It wraps withLatestFrom to produce the behavior that you (and very many others) expect, and it is potentially more semantically clear than combineLatestWith. It also works on RxJS versions before 7:
source1$.pipe(
concatLatestFrom(source2$),
// ...
)
I use a pipeline operator ponyfill, which is just a utility function applyPipe such that applyPipe(x, a, b) is equivalent to b(a(x)) or x |> a |> b (in this example there are two functions, but it can be any number of functions). In fp-ts this function is called pipe.
In my case the function is implemented as
export const applyPipe = (
  source,
  ...project
) => {
  // Apply each function to the result of the previous one, in order.
  for (const el of project) {
    source = el(source);
  }
  return source;
};
(you could also implement it with .reduce).
This function can be used to compose observable operators, so applyPipe(timer(500), delay(500)) is equivalent to timer(500).pipe(delay(500)). The question is: is there a performance penalty to using such a function in place of the .pipe method?
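For reference, the reduce-based implementation mentioned above might look like this (same hypothetical applyPipe name):

```javascript
// reduce-based variant of applyPipe: threads the source through each
// function in order, so applyPipe(x, a, b) === b(a(x)).
const applyPipe = (source, ...project) =>
  project.reduce((acc, fn) => fn(acc), source);

const result = applyPipe(2, (x) => x + 1, (x) => x * 3);
console.log(result); // 9, i.e. (2 + 1) * 3
```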
Theoretically, I see no major issue or performance degradation in doing so (other than the extra function call and copying the observable/object reference to process it in the function). You're just copying the observable reference (not the emissions of the observable), so it shouldn't be a big deal for performance reasons.
Also, ponyfills/polyfills are generally expected to perform either the same as or worse than the actual implementation. Just keep in mind that the spread operator copies only the top-level properties of an object (not any nested properties).
I would leave a comment in that ponyfill function to make it easier to understand for other developers who work with your codebase.
The documentation isn't helpful enough for me to understand the difference between these.
It's like concatMap, but maps each value always to the same inner Observable.
http://reactivex.io/rxjs/file/es6/operators/concatMapTo.js.html
I tried checking out learnrxjs.io's examples on stackblitz, but even with those, I wasn't able to immediately identify what the distinguishing feature was separating these.
FYI, I saw this other similar question:
What is the difference between mergeMap and mergeMapTo?
but the answer there wasn't satisfactory, because in the learnrxjs.io examples they clearly map to observables, not hard-coded values.
https://www.learnrxjs.io/operators/transformation/concatmapto.html
If someone could provide some examples (and maybe a brief explanation) to help distinguish the *** vs the ***To higher-order observable operators, I'd appreciate that, thanks.
Simply said, the *To variants always use the same Observable, which has to be created when the whole chain is created, regardless of the values emitted by the chain. They take an Observable as a parameter.
Variants without *To can create and return any Observable only when their source Observable emits. They take a callback as a parameter.
For example when I use mergeMapTo I'm always subscribing to the same Observable:
source.pipe(
mergeMapTo(of(1)),
)
Every emission from source will always be mapped to of(1) and there's no way I can change that.
On the other hand, with just mergeMap I can return any Observable I want depending on the value received:
source.pipe(
mergeMap(v => of(v * 2)),
)
Maybe an easier way to think about this is to remember that *To variants map a value to a constant (even when it's not a "real JavaScript constant").
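A practical consequence of this, sketched in plain JavaScript (no RxJS; makeInner is an invented stand-in for an Observable factory): the argument you pass to a *To variant is evaluated once, when the chain is built, while the callback passed to the plain variant runs for every source value.

```javascript
let builds = 0;
const makeInner = () => { builds += 1; return 'inner'; };

// *To style: the inner value is built eagerly, exactly once...
const inner = makeInner();
const toVariant = () => inner;   // every call reuses the same inner

// ...while the callback style builds a fresh inner per source value.
const cbVariant = () => makeInner();

['a', 'b', 'c'].forEach(toVariant);
console.log(builds); // 1 — three source values, still one inner

['a', 'b', 'c'].forEach(cbVariant);
console.log(builds); // 4 — one more inner per source value
```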
I am aggregating a bunch of enum values (different from the ordinal values) in a foreach loop.
int output = 0;
for (TestEnum testEnum: setOfEnums) {
output |= testEnum.getValue();
}
Is there a way to do this in streams API?
If I use a lambda like this in a Stream<TestEnum>:
setOfEnums.stream().forEach(testEnum -> (output |= testEnum.getValue()));
I get a compile time error that says, 'variable used in lambda should be effectively final'.
Predicate represents a boolean-valued function; you need to use the reduce method of the stream to aggregate a bunch of enum values.
If we consider that you have a HashSet named setOfEnums:
// initialValue only needs to be effectively final (i.e. never reassigned);
// the explicit `final` makes that guarantee obvious.
final int initialValue = 0;
int output = setOfEnums.stream()
    .map(TestEnum::getValue)
    .reduce(initialValue, (e1, e2) -> e1 | e2);
You need to reduce the stream of enums like this:
int output = Arrays.stream(TestEnum.values())
    .mapToInt(TestEnum::getValue)
    .reduce(0, (acc, value) -> acc | value);
I like the recommendations to use reduction, but perhaps a more complete answer would illustrate why it is a good idea.
In a lambda expression, you can reference variables like output that are in scope where the lambda expression is defined, but you cannot modify the values. The reason for that is that, internally, the compiler must be able to implement your lambda, if it chooses to do so, by creating a new function with your lambda as its body. The compiler may choose to add parameters as needed so that all of the values used in this generated function are available in the parameter list. In your case, such a function would definitely have the lambda's explicit parameter, testEnum, but because you also reference the local variable output in the lambda body, it could add that as a second parameter to the generated function. Effectively, the compiler might generate this function from your lambda:
private void generatedFunction1(TestEnum testEnum, int output) {
output |= testEnum.getValue();
}
As you can see, the output parameter is a copy of the output variable used by the caller, and the OR operation would only be applied to the copy. Since the original output variable wouldn't be modified, the language designers decided to prohibit modification of values passed implicitly to lambdas.
To get around the problem in the most direct way, setting aside for the moment that the use of reduction is a far better approach, you could wrap the output variable in a wrapper (e.g. an int[] array of size 1, or an AtomicInteger). The wrapper's reference would be passed by value to the generated function, and since you would now update the contents of output, not the value of output, output remains effectively final, so the compiler won't complain. For example:
AtomicInteger output = new AtomicInteger();
setOfEnums.stream().forEach(testEnum -> output.set(output.get() | testEnum.getValue()));
or, since we're using AtomicInteger, we may as well make it thread-safe in case you later choose to use a parallel Stream,
AtomicInteger output = new AtomicInteger();
setOfEnums.stream().forEach(testEnum -> output.getAndUpdate(prev -> prev | testEnum.getValue()));
Now that we've gone over an answer that most resembles what you asked about, we can talk about the superior solution of using reduction, that other answers have already recommended.
There are two kinds of reduction offered by Stream: stateless reduction (reduce()) and stateful reduction (collect()). To visualize the difference, consider a conveyor belt delivering hamburgers, where your goal is to collect all of the hamburger patties into one big hamburger. With stateful reduction, you start with a new hamburger bun, collect the patty out of each hamburger as it arrives, and add it to the stack of patties in the bun you set up to collect them. In stateless reduction, you start with an empty hamburger bun (called the "identity", since that empty bun is what you end up with if the conveyor belt is empty), and as each hamburger arrives on the belt, you make a copy of the previously accumulated burger and add the patty from the one that just arrived, discarding the previous accumulated burger.
The stateless reduction may seem like a huge waste, but there are cases when copying the accumulated value is very cheap. One such case is when accumulating primitive types -- primitive types are very cheap to copy, so stateless reduction is ideal when crunching primitives in applications such as summing, ORing, etc.
So, using stateless reduction, your example might become:
int output = setOfEnums.stream()
    .mapToInt(TestEnum::getValue) // or .mapToInt(testEnum -> testEnum.getValue())
    .reduce(0, (resultSoFar, value) -> resultSoFar | value);
Some points to ponder:
Your original for loop is probably faster than using streams, except perhaps if your set is very large and you use parallel streams. Don't use streams for the sake of using streams. Use them if they make sense.
In my first example, I showed the use of Stream.forEach(). If you ever find yourself creating a Stream and just calling forEach(), it is more efficient just to call forEach() on the collection directly.
You didn't mention what kind of Set you are using, but I hope you are using EnumSet<TestEnum>. Because it is implemented as a bit field, it performs much better (O(1)) than any other kind of Set for all operations, even copying. EnumSet.noneOf(TestEnum.class) creates an empty set, EnumSet.allOf(TestEnum.class) gives you a set of all enum values, etc.
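Putting the pieces together, here is a minimal self-contained sketch (the enum and its flag values are hypothetical, chosen as distinct bits) of the reduction over an EnumSet:

```java
import java.util.EnumSet;

public class EnumOrDemo {
    // Hypothetical enum: each constant carries a distinct bit-flag value.
    public enum TestEnum {
        A(1), B(2), C(4);

        private final int value;
        TestEnum(int value) { this.value = value; }
        public int getValue() { return value; }
    }

    // OR all the flag values together with a stateless reduction.
    public static int orTogether(EnumSet<TestEnum> set) {
        return set.stream()
                  .mapToInt(TestEnum::getValue)
                  .reduce(0, (acc, v) -> acc | v);
    }

    public static void main(String[] args) {
        System.out.println(orTogether(EnumSet.of(TestEnum.A, TestEnum.C))); // 5
        System.out.println(orTogether(EnumSet.allOf(TestEnum.class)));      // 7
    }
}
```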
I have a two-dimensional array of BehaviorSubject<number>s. For debugging purposes I want to write the values out in a formatted manner as soon as all the array cells emit values. So I wrote this:
Observable.zip(universe.map(row => Observable.zip(row)))
.takeUntil(stopper)
.subscribe(u =>
console.log(`[\n\t[${u.map(r => r.toString()).join("],\n\t[")}]\n]`))
Nothing was written. This didn't work either:
Observable.zip(universe[0])
.takeUntil(stopper)
.subscribe(u => console.log(`1[${u.toString()}]`))
But the following worked (the array has 5 columns):
Observable.zip(universe[0][0], universe[0][1], universe[0][2], universe[0][3], universe[0][4])
.takeUntil(stopper)
.subscribe(u => console.log(`2[${u.toString()}]`))
Observable.zip(Observable.zip(Observable.zip(Observable.zip(universe[0][0], universe[0][1]), universe[0][2]), universe[0][3]), universe[0][4])
.takeUntil(stopper)
.subscribe(u => console.log(`3[${u.toString()}]`))
I have also considered the .zipAll() operator, but there is no documentation for it.
This may be a bug in the Observable.zip() code, since code assistance shows ArrayLike<BehaviorSubject<number>> as a possible argument type.
So is there any other way to get this functionality? How can I get the array values written down once all of the values are reassigned, without knowing the actual dimensions of course?
The important thing is that the zip() operator doesn't take an array of Observables, but an unpacked series of Observables instead.
That's why Observable.zip([obs1, obs2, obs3]) doesn't work.
But Observable.zip(obs1, obs2, obs3) works.
It's not possible to help you when we don't know what universe is. From what you have now, it seems you could use spread syntax (assuming you're using ES6 or TypeScript):
Observable.zip(...universe[0]);
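The same distinction applies to any variadic function; a plain JavaScript illustration with Math.max shows why passing an array where separate arguments are expected fails:

```javascript
// Passing an array where separate arguments are expected fails:
// the array is coerced to a single (non-numeric) value, not unpacked.
console.log(Math.max([1, 2, 3]));    // NaN
// Spread syntax unpacks the array into individual arguments:
console.log(Math.max(...[1, 2, 3])); // 3
```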
I don't know what the plans are for zipAll(), but right now it just calls zip().
As of rxjs@5.0.3, the Observable.zip() implementation does not recognize arrays of Observables, even though both export declare function zipStatic<T>(array: ObservableInput<T>[]): Observable<T[]>; and export declare function zipStatic<T>(...observables: Array<ObservableInput<T>>): Observable<T[]>; declarations appear in rxjs/operator.zip.d.ts (the difference between these declarations is beyond my Type/JavaScript knowledge). It simply pumps the argument members passed to it into a local array and never flattens them if you pass an array, and it doesn't even check the parameter types to flag the situation.
After receiving @martin's answer above, I changed the calls to Observable.zip() to Observable.zip.apply(null, observable_array), and the problem was solved. But .zip() should accept (at least one) array of Observables in order to help readability and adhere to the aforementioned function declarations.