What is the performance difference between updating a single parent widget with multiple children versus updating each child on its own? Which is more performant, and by how much?
StreamBuilder(
  stream: sameStream,
  builder: (ctx, snapshot) {
    return Column(
      children: [
        Text("1"),
        Text("2"),
        Text("3"),
      ],
    );
  },
)
Or
Column(
  children: [
    StreamBuilder(
      stream: sameStream,
      builder: (ctx, snapshot) {
        return Text("1");
      },
    ),
    StreamBuilder(
      stream: sameStream,
      builder: (ctx, snapshot) {
        return Text("2");
      },
    ),
    StreamBuilder(
      stream: sameStream,
      builder: (ctx, snapshot) {
        return Text("3");
      },
    ),
  ],
)
Another question: what happens if we scale the children widgets to 100? Does the performance change?
Having more listeners on the same Stream (or any other piece of state) will decrease performance.
Check these benchmarks for what happens when you add listeners:
ValueNotifier benchmarks: https://github.com/knaeckeKami/changenotifier_benchmark
ChangeNotifier benchmarks: https://github.com/flutter/flutter/pull/62330
Having a list of expensive widgets behind a single listener will also decrease performance (since they all get rebuilt).
Anyway, if you are using the same Stream for multiple children you should use only one listener: when the state changes the children get a rebuild call either way, whether they share one listener or each has its own, but in the second case the Stream also has to do extra work notifying every listener it has.
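With the single-listener version you can also mark children that don't depend on the snapshot as const, so Flutter can skip rebuilding them even though the builder runs on every event. A minimal sketch, assuming sameStream is a Stream<int>:

StreamBuilder<int>(
  stream: sameStream,
  builder: (context, snapshot) {
    return Column(
      children: [
        Text('${snapshot.data}'), // depends on the stream, rebuilt on every event
        const Text('2'),          // const: the framework reuses the same widget instance
        const Text('3'),
      ],
    );
  },
)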
I see your problem: it occurs when you have a list (of state). With a single listener notification is faster when the state changes, but it can also be expensive to rebuild all the children that have no update of their own.
A solution for that case is Riverpod's ProviderScope, which can be used to override another provider for a particular widget, or the .select method.
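For example, a minimal sketch of the .select approach with Riverpod (the provider name and list shape are made up for illustration):

// Hypothetical provider holding the list of items.
final itemsProvider = StateProvider<List<String>>((ref) => ['1', '2', '3']);

class ItemText extends ConsumerWidget {
  const ItemText({super.key, required this.index});
  final int index;

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    // .select rebuilds this widget only when items[index] changes,
    // not when any other element of the list is updated.
    final item = ref.watch(itemsProvider.select((items) => items[index]));
    return Text(item);
  }
}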
I have some tests on an HTML table which add, modify, and delete rows. I'd like a generic function I can apply to clean up previous data so each test starts clean.
I currently reset the page, but there are quite a few steps to get to the start of testing, so an "undo" function would be very useful for faster tests.
This is currently what I have (simplified) for a single row:
cy.get('tr').should('have.length', 3).eq(0).click()
cy.get('tr').should('have.length', 2)
Now I need to enhance it to handle any number of rows. I tried looping but it didn't work; the test seems to run too fast for the page to keep up, if that makes sense.
Deleting rows from a table is tricky if the DOM gets rewritten each time you delete.
At minimum use a .should() assertion on the number of rows after each delete, to ensure each step is complete before the next one.
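For example, a minimal sketch of that per-delete assertion (assuming clicking a row deletes it):

cy.get('tr').its('length').then((count) => {
  cy.get('tr').first().click()                    // action that deletes the row
  cy.get('tr').should('have.length', count - 1)   // retries until the DOM catches up
})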
To be really safe, use a recursive function which controls the process, for example:
const clearTable = (attempt = 0) => {
  if (attempt === 100) throw 'Too many attempts'   // guards against too many steps
  cy.get('tbody').then($tbody => {
    if ($tbody.find('tr').length === 0) return;    // exit condition tested here
    cy.get('tr').then($rows => {
      cy.wrap($rows).first().click()               // action to delete
      cy.then(() => {
        clearTable(++attempt)                      // next step queued using then()
      })
    })
  })
}
clearTable()
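If the table should start empty for every test, one way (assuming clearTable is defined in the spec or a support file) is to call it from a beforeEach hook:

beforeEach(() => {
  clearTable()   // each test begins with an empty table
})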
I am making a splash screen for my app. I want it to last at least N seconds before going to the main screen.
I have an Rx variable myObservable that returns data from the server or from my local cache. How do I force myObservable to complete in at least N seconds?
myObservable
// .doStuff to make it last at least N seconds
.subscribe(...)
You can use forkJoin to wait until two Observables complete:
Observable.forkJoin(myObservable, Observable.timer(N), data => data)
.subscribe(...);
For RxJS 6 without the deprecated result selector function:
import { forkJoin, timer } from 'rxjs';
import { map } from 'rxjs/operators';

forkJoin([myObservable, timer(N)]).pipe(
  map(([data]) => data),
)
.subscribe(...);
Edit: as mentioned in the comments, timer(N) with just one parameter will complete after emitting one item, so there's no need to use take(1).
Angular 7+ example of forkJoin
I like to build in a higher delay on my development system since I assume production will be slower. Observable.timer doesn't seem to be available any longer but you can use timer directly.
forkJoin(
  // any observable such as your service that handles server coms
  myObservable,
  // or http will work like this
  // this.http.get( this.url ),
  // tune values for your app so very quick loads don't look strange
  timer( environment.production ? 133 : 667 ),
).subscribe( ( response: any ) => {
  // since we aren't remapping the response you could have multiple
  // and access them in order as an array
  this.dataset = response[0] || [];
  // the delay is only really useful if some visual state is changing once loaded
  this.loading = false;
});
I'm prototyping a fraud application. We'll frequently have metrics like "total amount of cash transactions in the last 5 days" that we need to compare against some threshold to determine if we raise an alert.
We're looking to use Kafka Streams to create and maintain the aggregates and then create an enhanced version of the incoming transaction that has the original transaction fields plus the aggregates. This enhanced record gets processed by a downstream rules system.
I'm wondering about the best way to approach this. I've prototyped creating the aggregates with code like this:
TimeWindows twoDayHopping = TimeWindows.of(TimeUnit.DAYS.toMillis(2))
                                       .advanceBy(TimeUnit.DAYS.toMillis(1));

KStream<String, AdditiveStatistics> aggrStream = transactions
    .filter((key, value) -> {
        return value.getAccountTypeDesc().equals("P") &&
               value.getPrimaryMediumDesc().equals("CASH");
    })
    .groupByKey()
    .aggregate(AdditiveStatistics::new,
               (key, value, accumulator) -> AdditiveStatsUtil.advance(value.getCurrencyAmount(), accumulator),
               twoDayHopping,
               metricsSerde,
               "sas10005_store")
    .toStream()
    .map((key, value) -> {
        value.setTransDate(key.window().start());
        return new KeyValue<String, AdditiveStatistics>(key.key(), value);
    })
    .through(Serdes.String(), metricsSerde, datedAggrTopic);
This creates a store-backed stream that has one record per key per window. I then join the original transactions stream to this windowed stream to produce the final output to a topic:
JoinWindows joinWindow = JoinWindows.of(TimeUnit.DAYS.toMillis(1))
                                    .before(TimeUnit.DAYS.toMillis(1))
                                    .after(-1)
                                    .until(TimeUnit.DAYS.toMillis(2) + 1);

KStream<String, Transactions10KEnhanced> enhancedTrans = transactions.join(aggrStream,
    (left, right) -> {
        Transactions10KEnhanced out = new Transactions10KEnhanced();
        out.setAccountNumber(left.getAccountNumber());
        out.setAccountTypeDesc(left.getAccountTypeDesc());
        out.setPartyNumber(left.getPartyNumber());
        out.setPrimaryMediumDesc(left.getPrimaryMediumDesc());
        out.setSecondaryMediumDesc(left.getSecondaryMediumDesc());
        out.setTransactionKey(left.getTransactionKey());
        out.setCurrencyAmount(left.getCurrencyAmount());
        out.setTransDate(left.getTransDate());
        if (right != null) {
            out.setSum2d(right.getSum());
        }
        return out;
    },
    joinWindow);
This produces the correct results, but it seems to run for quite a while, even with a low number of records. I'm wondering if there's a more efficient way to achieve the same result.
It's a config issue: cf. http://docs.confluent.io/current/streams/developer-guide.html#memory-management
Disabling caching by setting the cache size to zero (parameter cache.max.bytes.buffering in StreamsConfig) will resolve the "delayed" delivery to the output topic.
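For example, a minimal sketch of that setting (the class and property names come from the Kafka Streams API; the application id and bootstrap servers are placeholders):

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fraud-aggregates");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// Disable the record cache so aggregate updates are forwarded downstream immediately.
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);

KafkaStreams streams = new KafkaStreams(builder, new StreamsConfig(props));
streams.start();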
You might also read this blog post for some background information about Streams design: https://www.confluent.io/blog/watermarks-tables-event-time-dataflow-model/
I'm building a Flux app using MartyJS (which is pretty close to "vanilla" Flux and uses the same underlying dispatcher). It contains stores with an inherent dependency relationship. For example, a UserStore tracks the current user, and an InstanceStore tracks instances of data owned by the current user. Instance data is fetched from an API asynchronously.
The question is what to do to the state of the InstanceStore when the user changes.
I've come to believe (e.g. reading answers by #fisherwebdev on SO) that it's most appropriate to make AJAX requests in the action creator function, and to have an AJAX "success" result in an action that in turn causes stores to change.
So, to fetch the user (i.e. log in), I'm making an AJAX call in the action creator function, and when it resolves, I'm dispatching a RECEIVE_USER action with the user as a payload. The UserStore listens to this and updates its state accordingly.
However, I also need to re-fetch all the data in the InstanceStore if the user is changed.
Option 1: I can listen to RECEIVE_USER in the InstanceStore, and if it is a new user, trigger an AJAX request, which in turn creates another action, which in turn causes the InstanceStore to update. The problem with this is that it feels like cascading actions, although technically it's async so the dispatcher will probably allow it.
Option 2: Another way would be for InstanceStore to listen to change events emitted by UserStore and do the request-action dance then, but this feels wrong too.
Option 3: A third way would be for the action creator to orchestrate the two AJAX calls and dispatch the two actions separately. However, now the action creator has to know a lot about how the stores relate to one another.
One of the answers in Where should ajax request be made in Flux app? makes me think option 1 is the right one, but the Flux docs also imply that stores triggering actions is not good.
Something like option 3 seems like the cleanest solution to me, followed by option 1. My reasoning:
Option 2 deviates from the expected way of handling dependencies between stores (waitFor), and you'd have to check after each change event to figure out which ones are relevant and which can be ignored, or start using multiple event types; it could get pretty messy.
I think option 1 is viable; as Bill Fisher remarked in the post you linked, it's OK for API calls to be made from within stores provided that the resulting data is handled by calling new Actions. But OK doesn't necessarily mean ideal, and you'd probably achieve better separation of concerns and reduce cascading if you can collect all your API calls and action initiation in one place (i.e. ActionCreators). And that would be consistent with option 3.
However, now the action creator has to know a lot about how the stores relate to one another.
As I see it, the action creator doesn't need to know anything about what the stores are doing. It just needs to log in a user and then get the data associated with the user. Whether this is done through one API call or two, these are logically very closely coupled and make sense within the scope of one action creator. Once the user is logged in and the data is obtained, you could fire two actions (e.g. LOGGED_IN, GOT_USER_DATA) or even just one action that contains all the data needed for both. Either way, the actions are just echoing what the API calls did, and it's up to the stores to decide what to do with it.
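For illustration, a rough sketch of a single-action creator along those lines (the api wrapper, file paths, and function names are hypothetical, and the calls are assumed to return promises):

// UserActionCreators.js (hypothetical)
var AppDispatcher = require( '../dispatcher/AppDispatcher' );
var Constants = require( '../constants/Constants' );
var api = require( '../utils/api' ); // assumed wrapper around the AJAX calls

module.exports = {
  logIn: function ( credentials ) {
    api.logIn( credentials ).then(function ( user ) {
      api.getInstances( user.id ).then(function ( instances ) {
        // One action carrying everything both stores need.
        AppDispatcher.dispatch({
          actionType: Constants.GOT_USER_DATA,
          user: user,
          instances: instances
        });
      });
    });
  }
};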
I'd suggest using a single action to update both stores, because this seems like a perfect use case for waitFor: when one action triggers a handler in both stores, you can instruct InstanceStore to wait for UserStore's handler to finish before InstanceStore's handler executes. It would look something like this:
UserStore.dispatchToken = AppDispatcher.register(function(payload) {
  switch (payload.actionType) {
    case Constants.GOT_USER_DATA:
      ...(handle UserStore response)...
      break;
    ...
  }
});

...

InstanceStore.dispatchToken = AppDispatcher.register(function(payload) {
  switch (payload.actionType) {
    case Constants.GOT_USER_DATA:
      AppDispatcher.waitFor([UserStore.dispatchToken]);
      ...(handle InstanceStore response)...
      break;
    ...
  }
});
Option 1 seems the best choice conceptually to me. There are 2 separate API calls, so you have 2 sets of events.
It's a lot of events in a small amount of code, but Flux relies on always using the simple, standard Action->Store->View approach. Once you do something clever (like option 2), you've changed that. If other devs can no longer safely assume that any action flow works exactly the same as every other one, you've lost a big benefit of Flux.
It won't be the shortest approach in code though. MartyJS looks like it will be a little neater than Facebook's own Flux library at least!
A different option; if logins must always refresh the InstanceStore, why not have the login API call include all of the InstanceStore data as well?
(And taking it further; why have 2 separate stores? They seem very strongly coupled either way, and there's no reason you couldn't still make calls to the InstanceStore API without re-calling login anyway)
I usually use promises to resolve such situations.
For example:
// UserAction.js
var Marty = require( 'marty' );
var Constants = require( '../constants/UserConstants' );
var vow = require( 'vow' );

module.exports = Marty.createActionCreators({
  ...
  handleFormEvent: function ( path, e ) {
    var dfd = vow.defer();
    var prom = dfd.promise();

    this.dispatch( Constants.CHANGE_USER, dfd, prom );
  }
});
// UserStore.js
var Marty = require( 'marty' );
var Constants = require( '../constants/UserConstants' );

module.exports = Marty.createStore({
  id: 'UserStore',
  handlers: {
    changeUser: Constants.CHANGE_USER
  },
  changeUser: function ( dfd, __ ) {
    $.ajax( /* fetch new user */ )
      .then(function ( resp ) {
        /* do what you need */
        dfd.resolve( resp );
      });
  }
});
// InstanceStore.js
var Marty = require( 'marty' );
var UserConstants = require( '../constants/UserConstants' );

module.exports = Marty.createStore({
  id: 'InstanceStore',
  handlers: {
    changeInstanceByUser: UserConstants.CHANGE_USER
  },
  changeInstanceByUser: function ( __, prom ) {
    prom.then(function ( userData ) {
      /* OK, user now is switched */
      $.ajax( /* fetch new instance */ )
        .then(function ( resp ) { ... });
    });
  }
});
So I'm attempting to use reactives to recompose chunked messages identified by ID and am having a problem terminating the final observable. I have a Message class which consists of Id, Total Size, Payload, Chunk Number and Type and have the following client-side code:
I need to calculate the number of messages to Take at runtime
(from messages in
(from messageArgs in Receive select Serializer.Deserialize<Message>(new MemoryStream(Encoding.UTF8.GetBytes(messageArgs.Message))))
group messages by messages.Id into grouped select grouped)
.Subscribe(g =>
{
var cache = new List<Message>();
g.TakeWhile((int) Math.Ceiling(MaxPayload/g.First().Size) < cache.Count)
.Subscribe(cache.Add,
_ => { /* Rebuild Message Parts From Cache */ });
});
First I create a grouped observable filtering messages by their unique ID and then I am trying to cache all messages in each group until I have collected them all, then I sort them and put them together. The above seems to block on g.First().
I need a way to calculate the number to take from the first (or any) of the messages that come through, but I'm having difficulty doing so. Any help?
First is a blocking operator (how else can it return T and not IObservable<T>?)
I think using Scan (which builds an aggregate over time) could be what you need. Using Scan, you can hide the "state" of your message re-construction in a "builder" object.
MessageBuilder.IsComplete returns true when the total size of the messages it has received reaches MaxPayload (or whatever your requirement is). MessageBuilder.Build() then returns the reconstructed message.
I've also moved your "message building" code into a SelectMany, which keeps the built messages within the monad.
(Apologies for reformatting the code into extension methods, I find it difficult to read/write mixed LINQ syntax)
Receive
    .Select(messageArgs => Serializer.Deserialize<Message>(
        new MemoryStream(Encoding.UTF8.GetBytes(messageArgs.Message))))
    .GroupBy(message => message.Id)
    .SelectMany(group =>
    {
        // Use the builder to "add" message parts to the aggregate as they arrive
        return group.Scan(new MessageBuilder(), (builder, messagePart) =>
        {
            builder.AddPart(messagePart);
            return builder;
        })
        .SkipWhile(builder => !builder.IsComplete)
        .Select(builder => builder.Build());
    })
    .Subscribe(OnMessageReceived);
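For completeness, a rough sketch of what the MessageBuilder used above might look like (entirely hypothetical: it assumes each Message exposes settable Id, Size, ChunkNumber and Payload members as described in the question, that Payload is a string, and that MaxPayload is the sender's fixed chunk size):

using System;
using System.Collections.Generic;
using System.Linq;

class MessageBuilder
{
    private const int MaxPayload = 1024; // assumption: the sender's chunk size
    private readonly List<Message> parts = new List<Message>();

    public void AddPart(Message part)
    {
        parts.Add(part);
    }

    // Complete once we hold as many chunks as the total size implies.
    public bool IsComplete
    {
        get
        {
            return parts.Count > 0 &&
                   parts.Count >= (int)Math.Ceiling((double)parts[0].Size / MaxPayload);
        }
    }

    public Message Build()
    {
        // Reassemble the payload in chunk order.
        var ordered = parts.OrderBy(p => p.ChunkNumber).ToList();
        return new Message
        {
            Id = ordered[0].Id,
            Size = ordered[0].Size,
            Payload = string.Concat(ordered.Select(p => p.Payload))
        };
    }
}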