How to tabulate/aggregate a total value from an array of observables using reduce/scan (in NGRX/NGXS) - rxjs

I am trying to aggregate/tabulate the results of a set of observables. I have an array of observables that each emit a number, and I want to total up those results and emit the sum as the value. Each time the source numbers change, I want the end result to reflect the new total. The problem is that the previous results get added to the new total, which has to do with how I am using the reduce/scan operator. I believe it needs to be nested inside a switchMap/mergeMap, but so far I have been unable to figure out the solution.
I mocked up a simple example. It shows how many cars are owned by all users in total.
Initially, the count is correct, but when you add a car to a user, the new total includes the previous total.
https://stackblitz.com/edit/rxjs-concat-observables-3-drfd36
Any help is greatly appreciated.

Your scan works correctly; the issue is that on each update the stream receives all of the data again, so the quickest fix, I think, is to create a new instance of the stream in handleClickAddCar.
https://stackblitz.com/edit/rxjs-wrong-count.

I ended up doing this:
this.carCount$ = this.users$.pipe(
  map((users: User[]): Array<Observable<number>> => {
    let requests = users.map(
      (user: User): Observable<number> => {
        return this.store.select(UserSelectors.getCarsForUser(user)).pipe(
          map((cars: Car[]): number => {
            return cars.length;
          })
        );
      }
    );
    return requests;
  }),
  flatMap((results): Observable<number> => {
    return combineLatest(results).pipe(
      take(1),
      flatMap(data => data),
      reduce((accum: number, result: number): number => {
        return accum + result;
      }, 0)
    );
  })
);
I think the take(1) ends up doing the same thing Yasser does above by recreating the entire stream, but I think this way is a little cleaner.
I also added another stream below it (in the code) that goes one level deeper in terms of retrieving observables of observables.
https://stackblitz.com/edit/rxjs-concat-observables-working-1
Anyone have a cleaner, better way of doing this type of roll-up of observable results?
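The underlying pitfall can be shown without RxJS at all: an accumulator that lives across emissions (the scan/reduce approach) keeps the previous total, while recomputing the sum from the latest per-user counts on every emission (which is what combineLatest plus a summing map gives you) stays correct. A minimal plain-JS sketch; the carCounts array and the two "emission handler" functions are hypothetical stand-ins for the store selectors:

```javascript
// Stand-in for the per-user car-count streams: the latest value of each.
let carCounts = [1, 2, 0];

// Long-lived accumulator across emissions: WRONG after an update,
// because the previous total is still sitting in the accumulator.
let runningTotal = 0;
function onEmissionWithScan() {
  for (const n of carCounts) runningTotal += n;
  return runningTotal;
}

// Recompute from the latest values on every emission
// (what combineLatest + a summing map does): stays correct.
function onEmissionWithCombineLatest() {
  return carCounts.reduce((sum, n) => sum + n, 0);
}

onEmissionWithScan();          // 3 — correct the first time only
onEmissionWithCombineLatest(); // 3

carCounts = [1, 3, 0];         // a user gains a car
onEmissionWithScan();          // 7 — previous total leaked in
onEmissionWithCombineLatest(); // 4 — correct
```

In RxJS terms the correct variant is roughly `combineLatest(perUserCounts).pipe(map(counts => counts.reduce((a, b) => a + b, 0)))`, with no reduce/scan spanning emissions.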

Related

Deleting rows from a table after testing, generic function

I have some tests on an HTML table which add, modify, and delete rows. I'd like a generic function I can apply to clean up the previous data so each run starts clean.
I currently reset the page, but there are quite a few steps to get back to the start of testing, so an "undo" function would be very useful for faster tests.
This is currently what I have (simplified) for a single row
cy.get('tr').should('have.length', 3).eq(0).click()
cy.get('tr').should('have.length', 2)
Now I need to enhance it to handle any number of rows. I tried looping, but it didn't work - the test seems to run too fast for the page to keep up, if that makes sense.
Deleting rows from a table is tricky if the DOM gets re-written after each delete.
At minimum use a .should() assertion on the number of rows after each delete, to ensure each step is complete before the next one.
To be really safe, use a recursive function which controls the process, for example
const clearTable = (attempt = 0) => {
  if (attempt === 100) throw 'Too many attempts' // guards against too many steps
  cy.get('tbody').then($tbody => {
    if ($tbody.find('tr').length === 0) return; // exit condition tested here
    cy.get('tr').then($rows => {
      cy.wrap($rows).first().click() // action to delete
      cy.then(() => {
        clearTable(++attempt) // next step queued using then()
      })
    })
  })
}
clearTable()
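Outside Cypress, the same control flow is just guarded recursion over an async operation. This plain-JS model (with a hypothetical in-memory table and a stand-in for the click-and-re-render delete) shows the exit condition and the attempt guard:

```javascript
const table = { rows: ['r1', 'r2', 'r3'] };

// Hypothetical async delete, standing in for the click + DOM re-render.
const deleteFirstRow = () =>
  new Promise((resolve) => setTimeout(() => resolve(table.rows.shift()), 0));

async function clearTable(attempt = 0) {
  if (attempt === 100) throw new Error('Too many attempts'); // guard
  if (table.rows.length === 0) return;                       // exit condition
  await deleteFirstRow();                                    // one delete per step
  return clearTable(attempt + 1);                            // queue the next step
}
```

Each step awaits the previous delete before recursing, which is exactly what queuing the next step inside `cy.then()` buys you in Cypress.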

using forkJoin multiple times

I am working on a project where our client generates almost 500 requests simultaneously. I am using forkJoin to get all the responses as an array.
But after 40-50 requests the server blocks the requests or returns only errors. I have to split these 500 requests into chunks of 10 requests, loop over this chunks array, call forkJoin for each chunk, and convert each observable to a promise.
Is there any way to get rid of this loop over the chunks?
If I understand your question correctly, I think you are in a situation similar to this
const clientRequestParams = [params1, params2, ..., params500]
const requestAsObservables = clientRequestParams.map(params => {
  return myRequest(params)
})
forkJoin(requestAsObservables).subscribe(
  responses => {
    // do something with the array of responses
  }
)
and probably the problem is that the server cannot handle so many requests in parallel.
If my understanding is right and, as you write, there is a limit of 10 concurrent requests, you could try the mergeMap operator, specifying its concurrent parameter.
A solution could therefore be the following
const clientRequestParams = [params1, params2, ..., params500]
// use the from function from rxjs to create a stream of params
from(clientRequestParams).pipe(
  mergeMap(params => {
    return myRequest(params)
  }, 10) // 10 here is the concurrent parameter, which limits the number of
         // concurrent requests in flight to 10
).subscribe(
  responseNotification => {
    // do something with the response that you get from one invocation
    // of the service on the server
  }
)
If you adopt this strategy, you limit the concurrency but you are not guaranteed the order in the sequence of the responses. In other words, the second request can return before the first one has returned. So you need to find some mechanism to link the response to the request. One simple way would be to return not only the response from the server, but also the params which you used to invoke that specific request. In this case the code would look like this
const clientRequestParams = [params1, params2, ..., params500]
// use the from function from rxjs to create a stream of params
from(clientRequestParams).pipe(
  mergeMap(params => {
    return myRequest(params).pipe(
      map(resp => {
        return { resp, params }
      })
    )
  }, 10)
).subscribe(
  responseNotification => {
    // do something with the response that you get from one invocation
    // of the service on the server
  }
)
With this implementation you would create a stream which notifies both the response received from the server and the params used in that specific invocation.
You can adopt also other strategies, e.g. return the response and the sequence number representing that response, or maybe others.
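The concurrency-limiting behavior of `mergeMap(fn, 10)` can be modeled in plain JS with a small promise pool. This sketch (where `myRequest` is a hypothetical stand-in for the real service call) also pairs each response with its params, as suggested above:

```javascript
// Hypothetical request: resolves with a value derived from its params.
const myRequest = (params) =>
  new Promise((resolve) => setTimeout(() => resolve(`resp:${params}`), 0));

async function runWithConcurrency(paramsList, limit) {
  const results = [];
  let next = 0;
  // Each worker pulls the next params as soon as its current request
  // finishes, so at most `limit` requests are in flight at any time.
  const worker = async () => {
    while (next < paramsList.length) {
      const params = paramsList[next++];
      const resp = await myRequest(params);
      results.push({ resp, params }); // pair the response with its params
    }
  };
  const workerCount = Math.min(limit, paramsList.length);
  await Promise.all(Array.from({ length: workerCount }, worker));
  return results;
}
```

As with `mergeMap`, completion order is not arrival order, which is why each result carries its `params` along.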

Dexie, object not found when nesting collection

I thought I had got the hang of Dexie, but now I'm flabbergasted:
two tables, each with a handful of records: Komps & Bretts.
Output all Bretts:
rdb.Bretts.each(brett => {
  console.log(brett);
})
Output all Komps:
rdb.Komps.each(komp => {
  console.log(komp);
})
BUT: this only outputs the Bretts; for some weird reason, Komps is empty:
rdb.Bretts.each(brett => {
  console.log(brett);
  rdb.Komps.each(komp => {
    console.log(komp);
  })
})
I've tried all kinds of combinations with async/await, then(), etc. - the inner loop cannot find any data in the inner table, whichever table I try to do something with.
Second example. This works:
await rdb.Komps.get(163);
This produces an error ("Failed to execute 'objectStore' on 'IDBTransaction…ction': The specified object store was not found.")
rdb.Bretts.each(async brett => {
  await rdb.Komps.get(163);
})
Is there some kind of locking going on? something that can be disabled?
Thank you!
Calling rdb.Bretts.each() will implicitly launch a readOnly transaction limited to 'Bretts' only. This means that within the callback you can only reach that table, and that's the reason why it doesn't find the Komps table at that point. To get access to the Komps table from within the each callback, you would need to include it in an explicit transaction block:
rdb.transaction('r', 'Komps', 'Bretts', () => {
  rdb.Bretts.each(brett => {
    console.log(brett);
    rdb.Komps.each(komp => {
      console.log(komp);
    });
  });
});
However, each() does not respect promises returned by the callback, so even this fix is not something I would recommend, even though it would solve your problem. You could easily get race conditions, as you lose control of the flow when launching a new each() from an each callback.
I would recommend using toArray(), get(), bulkGet() and other methods rather than each() where possible. toArray() is also faster than each(), as it can utilize the faster IDB API calls IDBObjectStore.getAll() and IDBIndex.getAll() when possible. And you don't necessarily need to encapsulate the code in a transaction block (unless you really need that atomicity).
const komps = await rdb.Komps.toArray();
await Promise.all(
  komps.map(
    async komp => {
      // Do some async call per komp:
      const brett = await rdb.Bretts.get(163);
      console.log("brett with id 163", brett);
    }
  )
);
Now this example is a bit silly, as it does the exact same rdb.Bretts.get(163) for each komp it finds, but you could replace 163 with some dynamic value there.
Conclusion: There are two issues.
Dexie's operations run in implicit transactions, and the callback to each() lives within that limited transaction (tied to one single table only) unless you surround the call with a bigger explicit transaction block.
Try to avoid starting new async operations within the callback of Dexie's db.Table.each(), as it does not expect promises to be returned from its callback. You can do it, but it is better to stick with methods where you keep control of the async flow.
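The recommended flow above (read with toArray()/get() and keep control of the promises yourself) can be sketched against hypothetical in-memory stand-ins for the two tables, with no IndexedDB involved:

```javascript
// In-memory stand-ins for rdb.Komps and rdb.Bretts (hypothetical data).
const mockDb = {
  Komps: { toArray: async () => [{ id: 1 }, { id: 2 }] },
  Bretts: { get: async (id) => ({ id, name: `brett-${id}` }) },
};

async function listKompsWithBrett() {
  const komps = await mockDb.Komps.toArray(); // one bulk read, no open cursor
  // With the cursor closed, each follow-up read is an ordinary awaited call.
  return Promise.all(
    komps.map(async (komp) => {
      const brett = await mockDb.Bretts.get(163); // async call per komp
      return { komp, brett };
    })
  );
}
```

The point of the shape is that nothing async happens inside an each() cursor callback; all the awaiting is done where you control the flow.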

Model records ordering in Spine.js

As far as I can see in the Spine.js sources, the Model.each() function returns the Model's records in the order of their IDs. This is completely unreliable in scenarios where ordering is important: a long person list, etc.
Can you suggest a way to keep original records ordering (in the same order as they've arrived via refresh() or similar functions) ?
P.S.
Things are even worse because, by default, Spine.js internally uses new GUIDs as IDs, so record order is completely random, which is unacceptable.
EDIT:
It seems that in the last commit https://github.com/maccman/spine/commit/116b722dd8ea9912b9906db6b70da7948c16948a
they made it possible, but I have not tested it myself because I switched from Spine to Knockout.
I bumped into the same problem learning Spine.js. I'm using pure JS, so I had been neglecting the contacts example (http://spinejs.com/docs/example_contacts), which helped out on this one. As a matter of fact, you can't really keep the ordering from the server this way, but you can do your own ordering with JavaScript.
Notice that I'm using the Element Pattern here (http://spinejs.com/docs/controller_patterns).
First you define the function that is going to do the sorting inside the model:
/* Extending the Student Model */
Student.extend({
  nameSort: function(a, b) {
    if ((a.name || a.email) > (b.name || b.email))
      return 1;
    else
      return -1;
  }
});
Then, in the students controller, you add the elements using the sort:
/* Controller that manages the students */
var Students = Spine.Controller.sub({
  /* code omitted for simplicity */
  addOne: function(student){
    var item = new StudentItem({item: student});
    this.append(item.render());
  },
  addAll: function(){
    var sortedByName = Student.all().sort(Student.nameSort);
    var _self = this;
    $.each(sortedByName, function(){ _self.addOne(this); });
  },
});
And that's it.
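The comparator's name-or-email fallback can be checked in isolation. Note that it compares with a plain `>`, so it is case-sensitive; the sample records below are made up:

```javascript
// Same comparator as in the model, as a standalone function.
const nameSort = (a, b) =>
  (a.name || a.email) > (b.name || b.email) ? 1 : -1;

const students = [
  { name: 'Carl' },
  { email: 'Ann@example.com' }, // no name: falls back to email
  { name: 'Bea' },
];

const sorted = students.slice().sort(nameSort);
// Order: Ann@example.com, Bea, Carl
```

If names and emails can mix upper and lower case, a locale-aware compare (e.g. `String.prototype.localeCompare`) would be the safer choice.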

Using Reactives to Merge Chunked Messages

So I'm attempting to use reactives to recompose chunked messages identified by ID, and I'm having a problem terminating the final observable. I have a Message class which consists of Id, Total Size, Payload, Chunk Number, and Type, and I have the following client-side code:
I need to calculate the number of messages to Take at runtime
(from messages in
    (from messageArgs in Receive
     select Serializer.Deserialize<Message>(new MemoryStream(Encoding.UTF8.GetBytes(messageArgs.Message))))
 group messages by messages.Id into grouped
 select grouped)
.Subscribe(g =>
{
    var cache = new List<Message>();
    g.TakeWhile((int) Math.Ceiling(MaxPayload / g.First().Size) < cache.Count)
        .Subscribe(cache.Add,
            _ => { /* Rebuild Message Parts From Cache */ });
});
First I create a grouped observable, filtering messages by their unique ID, and then I try to cache all messages in each group until I have collected them all, then sort them and put them together. The above seems to block on g.First().
I need a way to calculate the number to take from the first (or any) of the messages that come through, but I am having difficulty doing so. Any help?
First is a blocking operator (how else can it return T and not IObservable<T>?)
I think using Scan (which builds an aggregate over time) could be what you need. Using Scan, you can hide the "state" of your message re-construction in a "builder" object.
MessageBuilder.IsComplete returns true when the total size of the messages it has received reaches MaxPayload (or whatever your requirements are). MessageBuilder.Build() then returns the reconstructed message.
I've also moved your "message building" code into a SelectMany, which keeps the built messages within the monad.
(Apologies for reformatting the code into extension methods, I find it difficult to read/write mixed LINQ syntax)
Receive
    .Select(messageArgs => Serializer.Deserialize<Message>(
        new MemoryStream(Encoding.UTF8.GetBytes(messageArgs.Message))))
    .GroupBy(message => message.Id)
    .SelectMany(group =>
    {
        // Use the builder to accumulate message parts
        return group.Scan(new MessageBuilder(), (builder, messagePart) =>
        {
            builder.AddPart(messagePart);
            return builder;
        })
        .SkipWhile(builder => !builder.IsComplete)
        .Select(builder => builder.Build());
    })
    .Subscribe(OnMessageReceived);
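The Scan-plus-builder idea is independent of Rx.NET. Here it is modeled in plain JavaScript, with a hypothetical MessageBuilder that tracks completeness by total payload size (using the per-chunk `size` field as the full message size) and reassembles chunks in order:

```javascript
class MessageBuilder {
  constructor() {
    this.parts = [];
    this.received = 0;
  }
  addPart(part) {
    this.parts.push(part);
    this.received += part.payload.length;
    return this; // returning the builder mirrors the Scan accumulator
  }
  get isComplete() {
    // part.size is the total size of the full message (same on every chunk)
    return this.parts.length > 0 && this.received >= this.parts[0].size;
  }
  build() {
    return this.parts
      .slice()
      .sort((a, b) => a.chunk - b.chunk) // chunks may arrive out of order
      .map((p) => p.payload)
      .join('');
  }
}

// Scan-like fold over the chunks of one message group (id 7):
const chunks = [
  { id: 7, size: 6, chunk: 1, payload: 'lo!' },
  { id: 7, size: 6, chunk: 0, payload: 'hel' },
];
const builder = chunks.reduce((b, part) => b.addPart(part), new MessageBuilder());
// builder.isComplete is now true; builder.build() reassembles 'hello!'
```

The `reduce` here plays the role of `Scan` over the group, and the `isComplete`/`build` pair corresponds to `SkipWhile(!IsComplete)` followed by `Select(Build)`.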
