How is it possible with RxJS to make a cascaded forEach loop? Currently, I have 4 observables containing simple string lists, called x1 - x4. What I want to achieve is to run over all variations and to call a REST API with an object built from each variation. Usually, I would do something like that with a forEach, but how can it be done with RxJS? Please see the abstracted code:
let x1$ = of([1, 2]);
let x2$ = of(['a', 'b', 'c', 'd', 'e', 'f']);
let x3$ = of(['A', 'B', 'C', 'D', 'E', 'F']);
let x4$ = of(['M', 'N', 'O', 'P']);
x1$.forEach(x1 => {
  x2$.forEach(x2 => {
    x3$.forEach(x3 => {
      x4$.forEach(x4 => {
        let data = {
          a: x1,
          b: x2,
          c: x3,
          d: x4
        }
        return this.restService.post('/xxxx', data)
      })
    })
  })
})
Is something like that possible with RXJS in an elegant way?
Let's assume you have a function combineLists which represents the plain-array version of the logic, turning static lists into an array of request observables:
function combineLists(lists: unknown[][]) {
  const [x1s, x2s, x3s, x4s] = lists;
  // Calculate combinations, you can also use your forEach instead
  const combinations = x1s
    .flatMap(a => x2s
      .flatMap(b => x3s
        .flatMap(c => x4s
          .flatMap(d => ({a, b, c, d})))));
  return combinations.map(combination => this.restService.post('/xxxx', combination));
}
Since your input observables are one-offs as well, we can use e.g. forkJoin. This waits for all of them to complete and then runs with their respective plain values. At this point you're back to computing the combinations with your preferred method.
forkJoin([x1$, x2$, x3$, x4$]).pipe(
  map(combineLists),
);
Assuming your REST call is typed to return T, the above produces Observable<Observable<T>[]>. How you proceed from here depends on what data structure you're looking for / how you want to continue working with this. This didn't seem to be part of your question anymore, but I'll give a couple hints nonetheless:
If you want an Observable<T>, you can just add e.g. a mergeAll() operator. This observable will emit the results of all individual requests one after another, in whichever order they arrive.
forkJoin([x1$, x2$, x3$, x4$]).pipe(
  map(combineLists),
  mergeAll(),
);
If you want an Observable<T[]> instead, which collects the results into a single emission, you could once again forkJoin the produced array of requests. This also preserves the order.
forkJoin([x1$, x2$, x3$, x4$]).pipe(
  map(combineLists),
  switchMap(forkJoin),
);
Some words of caution:
Don't forget to subscribe to make it actually do something.
You should make sure to handle errors on all your REST calls. This must happen right at the call itself, not after this entire pipeline, unless you want one single failed request to break the entire pipe.
Keep in mind that forkJoin([]) over an empty array doesn't emit anything.
Triggering a lot of requests like this probably means the API should be changed (if possible), as the number of requests is the product of the list sizes and grows very quickly.
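To put a number on that last point, here is a quick plain-JavaScript check, using the lengths of the four example lists from the question:

```javascript
// Lengths of the four example lists: [1,2], [a..f], [A..F], [M..P]
const listLengths = [2, 6, 6, 4];

// The number of combinations (and therefore requests) is the product of the lengths
const requestCount = listLengths.reduce((product, len) => product * len, 1);

console.log(requestCount); // 2 * 6 * 6 * 4 = 288
```

Even these tiny lists already yield 288 REST calls; adding one more element to any list multiplies the total again.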
In the application, combineLatest is used to combine three observables:
class SomeComponent {
  private heightProvider = new SubjectProvider<any>(this);
  private marginsProvider = new SubjectProvider<any>(this);
  private domainProvider = new SubjectProvider<any>(this);

  arbitraryMethod(): void {
    combineLatest([
      this.heightProvider.value$,
      this.marginsProvider.value$,
      this.domainProvider.value$
    ]).pipe(
      map(([height, margins, domain]) => {
        // ...
      })
    );
  }

  setHeight(height: number): void {
    this.heightProvider.next(height);
  }

  setMargins(margins: {}): void {
    this.marginsProvider.next(margins);
  }

  setDomain(domain: []): void {
    this.domainProvider.next(domain);
  }
}
However, I've noticed a few times already that I am sometimes forgetting to set one of these observables.
Is there a way I can build in error handling that logs an error to the console once one of these isn't set?
Observables aren't typically 'set' or 'not set'. I'm not sure what you mean by this. If you have a predicate that can check your observables, here is how you might use it.
// predicate
function notSet(o: Observable<any>): boolean {
  // ...
}

scale$: Observable<any> = defer(() => {
  const combining = [
    this.heightProvider.value$,
    this.marginsProvider.value$,
    this.domainProvider.value$
  ];
  const allSet = !combining.find(notSet);
  if (!allSet) console.log("Not Set Error");
  return !allSet
    ? EMPTY
    : combineLatest(combining).pipe(
        map(([height, margins, domain]) => {
          // ...
        })
      );
});
Update
Ensuring source observables have emitted
If I understand your problem properly, you want to throw an error if any of your source observables haven't emitted yet. At its heart, this feels like a simple problem, but it happens to be a problem for which there doesn't exist a single general solution.
Your solution has to be domain-specific to some extent.
A simplified example of a similar problem
What you're asking is similar to this:
How do I throw an error if 'add' isn't invoked with a second number?
const add = (a: number) => (b: number): number => {
  // How do I throw an error if this function
  // isn't invoked with a second number?
  return a + b;
}
/***********
* Example 1
***********/
// add is being called with one number
const add5 = add(5);
...
/* More code here */
...
// add is being called with a second number
const result = add5(50);
console.log(result); // Prints "55"
/***********
* Example 2
***********/
const result = add(5)(20); // Add is being called with both numbers
console.log(result); // Prints "55"
/***********
* Example 3
***********/
// add is being called with one number
const add5 = add(5);
...
/* More code here */
...
// add was never given a second number
return
// Add throws an error? How?
How can you write add such that it throws an error if the second number isn't 'set'? Well, there's no simple answer. add doesn't know the future and can't guess whether that second number was forgotten or will still be set in the future. To add, those two scenarios look the same.
One solution is to re-write add so that it must take both parameters at once. If either is missing, throw an error:
const add = (a: number, b: number): number => {
  if (a != null && b != null) {
    return a + b;
  }
  throw "add: invalid argument error";
}
This solution fundamentally changes how add works. This solution doesn't work if I have a requirement that add must take its arguments one at a time.
If I want add to keep that behaviour, perhaps I can set a timer and throw an error if the second argument isn't given fast enough.
const add = (a: number) => {
  const t = setTimeout(
    () => { throw "add: argument timeout error"; },
    1000 // wait 1 second
  );
  return (b: number): number => {
    clearTimeout(t); // cancel the error
    return a + b;
  };
}
Now add takes its arguments one at a time, but is a timeout really how I want this to work? Maybe I only care that add is given a second parameter before some other event (an API call returns or a user navigates away from the page) or something.
Hopefully, you can begin to understand how such a "simple" problem has only domain-specific solutions.
Observables
Your question, as writ, doesn't tell us enough about what you're trying to accomplish to guess what behaviour you want.
Observables have a lot of power built into them to allow you to design a solution specific to your needs. It's almost certain that you can throw an error if one of your observables isn't set, but first, you must define what this even means.
Is it not set quickly enough? Is it not set in time for a certain function call? Not set when an event is raised? Never set? How would you like to define never? When the program is shut down?
Maybe you could switch your Subjects for BehaviorSubjects so that they MUST always have a value set (sort of like add taking both arguments at once instead of one at a time).
All of these things (and many many many more) are possible.
Q: can RxJs operators be used to flatten an array, transform items, then unflatten it, whilst maintaining a continuous stream (not completing)?
For the simplified example here: https://stackblitz.com/edit/rxjs-a1791p?file=index.ts
If following the approach:
mergeMap(next => next),
switchMap(next => of(***transforming logic***)),
toArray()
then the observable does not complete, and the values do not come through. A take(1) could be added but this is intended to be a continuous stream.
If using:
mergeMap(next => next),
switchMap(next => of(***transforming logic***)),
scan()
then this works great. However, then each time the source observable emits, the accumulator never resets, so the scan() which is intended to accumulate the values back into an array ends up combining multiple arrays from each pass. Can the accumulator be reset?
Obviously it can be accomplished with:
switchMap(next => of(next.map(***transforming logic***)))
But my real-world example is an awful lot more complicated than this, and is tied into NgRx.
Here would be one approach:
src$.pipe(
  mergeMap(
    arr => from(arr).pipe(
      switchMap(item => /* ... */),
      toArray(),
    )
  )
)
For each emitted array, mergeMap will create an inner observable(from(..)). There, from(array) will emit each item separately, allowing you to perform some logic in switchMap. Attaching toArray() at the end will give you an array with the results from switchMap's inner observable.
You don't need to use mergeMap or switchMap here. You would only need those if you are doing something asynchronously. Like if you were taking the input value and creating an observable (ex: to make an http call).
By using of inside of mergeMap, you are essentially starting with an Observable, taking the unpacked value (an array), then turning it back into an Observable.
From your stack blitz:
The reason your first strategy doesn't complete is because toArray() is happening on the level of the source (clicksFromToArrayButton), and that is never going to complete.
If you really wanted to, you could nest it up a level, so that toArray() happens on the level of your array (created with from(), which will complete after all values are emitted).
const transformedMaleNames = maleNames.pipe(
  mergeMap(next => from(next).pipe(
    map(next => {
      const parts = next.name.split(' ');
      return { firstName: parts[0], lastName: parts[1] };
    }),
    toArray()
  )),
);
But... we don't really need to use from to create an observable, just so it can complete, just so toArray() can put it back together for you. We can use the regular map operator instead of mergeMap, along with Array.map():
const transformedMaleNames = maleNames.pipe(
  map(nextArray => {
    return nextArray.map(next => {
      const parts = next.name.split(' ');
      return { firstName: parts[0], lastName: parts[1] };
    });
  })
);
this works, but isn't necessarily utilizing RxJS operators fully?
Well, ya gotta use the right tool for the right job! In this case, you are simply transforming array elements, so Array.map() is perfect for this.
But my real-world example is an awful lot more complicated than this
If you are concerned about the code getting messy, you can just break the transformation logic out into its own function:
const transformedMaleNames = maleNames.pipe(
  map(next => next.map(transformName))
);

function transformName(next) {
  const parts = next.name.split(' ');
  return { firstName: parts[0], lastName: parts[1] };
}
Here's a working StackBlitz.
I am trying to understand why share RxJs operator works differently if the source Observable is created with range instead of timer.
Changing the original code to:
const source = range(1, 1).pipe(
  share()
)
const example = source.pipe(
  tap(() => console.log('***SIDE EFFECT***')),
  mapTo('***RESULT***'),
)
const sharedExample = example
const subscribeThree = sharedExample.subscribe(val => console.log(val))
const subscribeFour = sharedExample.subscribe(val => console.log(val))
Results in:
console.log src/pipeline/foo.spec.ts:223
SIDE EFFECT
console.log src/pipeline/foo.spec.ts:228
RESULT
console.log src/pipeline/foo.spec.ts:223
SIDE EFFECT
console.log src/pipeline/foo.spec.ts:229
RESULT
Basically, the side effect is invoked more than once.
As far as I know range is supposed to be a cold observable but it is said that share should turn cold observables to hot.
What is the explanation behind this behaviour ?
Two things to point out.
First, if you look closely at the function signature for range, you'll see it takes a third parameter, a SchedulerLike.
If unspecified, RxJS calls the next handler of each subscriber immediately with the relevant value for the range observable until it's exhausted. This isn't desirable if you intend to use the share operator, because it effectively bypasses any shared side effect processing that might be introduced.
Relevant snippet taken from the actual implementation:
// src/internal/observable/range.ts#L53
do {
  if (index++ >= count) {
    subscriber.complete();
    break;
  }
  subscriber.next(current++);
  if (subscriber.closed) {
    break;
  }
} while (true);
timer also takes an optional SchedulerLike argument. If unspecified, the implementation adopts AsyncScheduler by default, different to the default for range.
Secondly, the share operator should follow all other operators that might have side effects. If it precedes them, the expected unifying behaviour of pipe operator processing is lost.
So, with both points in mind, to make the share operator work with range as you're expecting:
const { asyncScheduler, range, timer } = rxjs;
const { mapTo, tap, share } = rxjs.operators;

// Pass in an `AsyncScheduler` to prevent immediate `next` handler calls
const source = range(1, 1, asyncScheduler).pipe(
  tap(() => console.log('***SIDE EFFECT***')),
  mapTo('***RESULT***'),
  // All preceding operators will be in shared processing
  share(),
);

const sub3 = source.subscribe(console.log);
const sub4 = source.subscribe(console.log);
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/6.4.0/rxjs.umd.min.js"></script>
I just started to work on a new project using TypeScript. I'm coming from another project that also used TypeScript. Since the native for of loop became available in TypeScript, we decided (old project team) to use it. Especially for me it was much more convenient to write the for of loop, given my Java background.
Now in the new project they use the _.forEach() loop everywhere to iterate over arrays.
What I am wondering is: is there a performance difference between the native TypeScript for of and _.forEach()?
I've created a little test on jsPerf; they seem to be more or less exactly the same speed...
https://jsperf.com/foreach-vs-forof/12
TypeScript For of
for (let num of list) {
  console.log(num);
}
In JavaScript
var list = "9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9".split();
//Transpiled TypeScript for of | 19,937 ±5.04%
for (var _i = 0, list_1 = list; _i < list_1.length; _i++) {
  var num = list_1[_i];
  console.log("" + num);
}
//lodash | 20,520 ±1.22%
_.forEach(list, function(item) {
  console.log("" + item)
});
IMHO I would prefer the "native" for of from TypeScript, because it's more readable for me.
What do you guys suggest using? Are there other reasons to prefer for of, or is _.forEach better?
I don't have any experience with TypeScript beyond my reading, but I do have quite a bit of experience with ES6/ES2015. for of was and still is part of the finalized ES2015 spec. I would read this article on for of from MDN.
Here are some similarities and differences of for of and forEach (and these are just as far as I have found and know of currently):
- forEach in lodash works on collections that are Arrays, Objects, or Strings.
- native forEach works on Arrays, Maps, and Sets.
- for of works on all Iterables: Arrays, Strings, TypedArrays, Maps, Sets, DOM collections, and generators.
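As a small illustration of that difference, here is plain JavaScript iterating a Map both ways; note that Map's own forEach passes (value, key), the reverse of the entry order for of sees:

```javascript
const ages = new Map([["alice", 30], ["bob", 25]]);

// for of works on any iterable, including Map; each entry is a [key, value] pair
const fromForOf = [];
for (const [name, age] of ages) {
  fromForOf.push(`${name}:${age}`);
}

// Map also has its own forEach, but the callback receives (value, key)
const fromForEach = [];
ages.forEach((age, name) => {
  fromForEach.push(`${name}:${age}`);
});

console.log(fromForOf);   // ["alice:30", "bob:25"]
console.log(fromForEach); // same entries, same insertion order
```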
I would read this chapter on for of from Exploring ES6 (Exploring ES6 is a great read; it's very thorough, and free online as well). Some things from it stand out to me as different about for of that aren't in forEach:
break and continue work inside for-of loops
break and continue aren't exposed in forEach. The closest thing you can get to continue in forEach is using return, which is effectively the same thing. As for break, though, I see no alternative (but don't discount lodash, because most things that need breaks, like finding and returning a single item, are already covered in much of lodash's library).
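To make that concrete, here is a small sketch: break stops a for of loop outright, while the closest native-array workaround is switching from forEach to some, which stops iterating once the callback returns true:

```javascript
const nums = [1, 2, 3, 4, 5];

// for of: stop at the first number greater than 2
const visitedForOf = [];
for (const n of nums) {
  if (n > 2) break;
  visitedForOf.push(n);
}

// forEach has no break; some() stops once the callback returns true
const visitedSome = [];
nums.some(n => {
  if (n > 2) return true; // acts like break
  visitedSome.push(n);
  return false;
});

console.log(visitedForOf); // [1, 2]
console.log(visitedSome);  // [1, 2]
```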
It should also be noted that the await keyword from async/await is usable inside for of, whereas forEach makes it quite a bit harder to stop the surrounding block from waiting on Promises awaited within the forEach callback. It is possible with forEach, but using map or reduce may make the awaiting much simpler (depending on your familiarity with those functions). Below are three separate implementations of awaiting promises, both sequentially and in parallel, using for of, forEach, and reduce and map, just to show the possible differences.
const timeout = ms => new Promise(res => setTimeout(() => res(ms), ms));
const times = [100, 50, 10, 30];

async function forOf() {
  console.log("running sequential forOf:");
  for (const time of times) {
    await timeout(time);
    console.log(`waited ${time}ms`);
  }
  console.log("running parallel forOf:");
  const promises = [];
  for (const time of times) {
    const promise = timeout(time).then(function(ms) {
      console.log(`waited ${ms}ms`);
    });
    promises.push(promise);
  }
  await Promise.all(promises);
}

async function forEach() {
  console.log("running sequential forEach:");
  let promise = Promise.resolve();
  times.forEach(function(time) {
    promise = promise.then(async function() {
      await timeout(time);
      console.log(`waited ${time}ms`);
    });
  });
  await promise;
  console.log("running parallel forEach:");
  const promises = [];
  times.forEach(function(time) {
    const promise = timeout(time).then(function(ms) {
      console.log(`waited ${ms}ms`);
    });
    promises.push(promise);
  });
  await Promise.all(promises);
}

async function reduceAndMap() {
  console.log("running sequential reduce:");
  const promise = times.reduce(function(promise, time) {
    return promise.then(async function() {
      await timeout(time);
      console.log(`waited ${time}ms`);
    });
  }, Promise.resolve());
  await promise;
  console.log("running parallel map:");
  const promises = times.map(async function(time) {
    const ms = await timeout(time);
    console.log(`waited ${ms}ms`);
  });
  await Promise.all(promises);
}

forOf().then(async function() {
  await forEach();
  await reduceAndMap();
}).then(function() {
  console.log("done");
});
With Object.entries, which arrived in ES2017, you can even iterate an object's own enumerable properties and values with ease and accuracy. If you want to use it now, you can with one of the polyfills here. Here's an example of what that would look like.
var obj = {foo: "bar", baz: "qux"};
for (let x of Object.entries(obj)) { // OK
  console.log(x); // logs ["foo", "bar"] then ["baz", "qux"]
}
and here's an implementation with a quick polyfill I wrote. You would normally use array destructuring as well, which separates the key and value into their own variables, like this:
var obj = {foo: "bar", baz: "qux"};
for (let [key, val] of Object.entries(obj)) { // OK
  console.log(key + " " + val); // logs "foo bar" then "baz qux"
}
You can also use Object.entries with forEach like so:
var obj = {foo: "bar", baz: "qux"};
console.log("without array destructuring");
Object.entries(obj).forEach((x) => { // OK
  const key = x[0], val = x[1];
  console.log(key + " " + val); // logs "foo bar" then "baz qux"
});
console.log("with array destructuring");
Object.entries(obj).forEach(([key, val]) => { // OK
  console.log(key + " " + val); // logs "foo bar" then "baz qux"
});
forEach's callback parameter gives you, by default, the kind of per-iteration scoping you would get from let in a for or for of loop, which is a good thing. What I mean by that is that if anything asynchronous is going on inside, the variable for that iteration is scoped to just that particular pass of the loop. This property of forEach is not really to do with let, but with the scope and closures of functions in JavaScript; the alternative behaviour comes from var not being block scoped. For example, see what happens here when var is used:
const arr = [1,2,3,4,5,6,7,8,9];
for (var item of arr) {
  setTimeout(() => {
    console.log(item);
  }, 100);
}
As opposed to when let or forEach is used.
const arr = [1,2,3,4,5,6,7,8,9];
const timeout = 100;

console.log('for of');
for (let item of arr) {
  setTimeout(() => {
    console.log(item);
  }, timeout);
}

setTimeout(() => {
  console.log('foreach');
  arr.forEach((item) => {
    setTimeout(() => {
      console.log(item);
    }, timeout);
  })
}, timeout * arr.length);
Again, I will note the difference between using var and using let or forEach. The difference is that var's variable is hoisted up to the top of the function scope (or file, if it's not in a function) and its value is reassigned across that whole scope, so the loop reaches its end, assigns item for the last time, and then every setTimeout callback logs that last item. Whereas with let and forEach the variable item does not get overwritten, because item is scoped to the block (when let is used) or the function (when forEach is used).
Between forEach and for of you just need to decide which one is best for the current job (e.g. do you need break, or need to iterate Maps, Sets or generators? Use for of). Beyond that, I feel there aren't particularly strong reasons for either on the collections they both operate on with their core functionality. When dealing with collections that can use either forEach or for of, it's mainly just personal preference, as they do the same thing at about the same speed (and the speeds could change at any time depending on the interpreter). I feel the particular advantage of lodash is its other various functions, which could actually save you a lot of time over writing the code yourself, like map, reduce, filter, and find. Since you feel most comfortable writing for of, I suggest you continue writing it that way, but once you start using lodash's other functions you may start to feel more comfortable writing it the lodash way.
Edit:
Looking over your code, I noticed an error in your list creation. At the end you just had .split() when you should have had .split(","). You were creating a list of length 1 containing the whole string and iterating once over that string; that is why the benchmarks were so similar. I reran the tests. Here they are. I still wouldn't worry about the performance that much, as it seems to change every time it's run.
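The difference is easy to verify: with no separator, String.prototype.split returns a single-element array containing the whole string, while split(",") actually splits it:

```javascript
const raw = "9,9,9";

const noSeparator = raw.split();  // no argument: the whole string as one element
const withComma = raw.split(","); // actually splits on commas

console.log(noSeparator); // ["9,9,9"] -> the loop body runs once
console.log(withComma);   // ["9", "9", "9"] -> the loop body runs three times
```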
Based on your test I added another, using the native Array.prototype.forEach :
list.forEach(function(item) {
  console.log("" + item)
});
This is in fact my preferred way, since it is actually much easier to type. It's also closer to other things you might want to do with an array, e.g. map/filter etc.
Note that at http://jsperf.com/foreach-vs-forof/9 all three show no plausible performance difference.
I can't comment on lodash, I haven't used it. But below is some background that may help.
'For of' was introduced in TypeScript 1.5 for looping over each element of e.g. an array. If you examine the JS output (and depending on whether you are targeting ECMAScript 5 or 6), you should find that, in the case of ECMAScript 5, the output of both of the below will be identical. See this article for associated background reading and how targeting ES6/2015 affects the output.
As for the TypeScript implementation of forEach, there is an interesting discussion on GitHub here, especially around conditionally breaking out of the loop.
for (let line of v.lineEntry) {
}
for (var _i = 0, list_1 = list; _i < list_1.length; _i++) {
}