I'm dealing with a big JSON document with a lot of editable values (big meaning > 1000), entirely rendered on the same page, so my state is simply { data: bigBigJson }.
The initial rendering is quite long, but that's OK.
The problem is that when an input triggers an onChange (and a Redux action), the value is updated in the state, and the whole rendering happens again.
I wonder how people deal with that? Are there simple solutions (even if not necessarily best practices)?
Notes:
The JSON document is provided by an external API; I can't change it.
I could separate the state into several sub-states (it's a multi-level JSON document), but I'm hoping for a simpler/faster solution (I know that would probably be the best practice though).
I'm using React and Redux, not Immutable.js, but everything is immutable (obviously).
––
Update (about DSS answer)
• (Case 1) Let's say the state is:
{
  data: {
    key1: value1,
    // ...
    key1000: value1000
  }
}
If keyN is updated, the whole state would be re-rendered anyway, right? The reducer would return something like:
{
  data: {
    ...state.data,
    keyN: newValueN
  }
}
That's one thing but it's not really my case.
• (Case 2) The state is more like (oversimplified):
{
  data: {
    dataSet1: {
      key1: value1,
      // ...
      key10: value10
    },
    // ...
    dataSet100: {
      key1: value1,
      // ...
      key10: value10
    }
  }
}
If dataSetN.keyN is updated, I would return this from the reducer:
{
  data: {
    ...state.data,
    dataSetN: {
      ...state.data.dataSetN,
      keyN: newValueN
    }
  }
}
I guess I'm doing something wrong, as it doesn't look very nice.
Would it change anything to flatten the state like this:
// state
{
  dataSet1: {
    key1: value1,
    // ...
    key10: value10
  },
  // ...
  dataSet100: {
    key1: value1,
    // ...
    key10: value10
  }
}
// reducer
{
  ...state,
  dataSetN: {
    ...state.dataSetN,
    keyN: newValueN
  }
}
Finally, just to be more specific about my case, here is closer to what my reducer actually looks like (still a bit simplified):
import get from 'lodash/fp/get'
import set from 'lodash/fp/set'
// ...
// reducer:
// path = 'values[3].values[4].values[0]'
return {
  data: set(path, {
    ...get(path, state.data),
    value: newValue
  }, state.data)
}
• In case you are wondering, I can't just use:
data: set(path + '.value', newValue, state.data)
as other properties need to be updated as well.
The reason everything gets re-rendered is that everything in your store changes. It may look the same. All properties may have the same values. But all object references have changed. That is to say, even if two objects have the same properties, they still have separate identities.
Since React-Redux uses object identity to figure out whether an object has changed, you should always make sure to use the same object reference whenever an object has not changed. Since Redux state must be immutable, using the old object in the new state is guaranteed not to cause problems. Immutable objects can be reused the same way an integer or a string can be reused.
To solve your dilemma, you can, in your reducer, walk the incoming JSON and the store's existing sub-objects and compare them. If they are the same, reuse the store's object (a sketch follows). By reusing the same object, React-Redux will make sure the components that represent those objects are not re-rendered. This means that if only one of those 1000 objects changes, only one component will update.
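A minimal sketch of that comparison, using lodash's isEqual as the structural check (any deep-equality helper would do; the reducer wiring is assumed):

import isEqual from 'lodash/isEqual';

// Merge freshly fetched data into the previous state, reusing the
// old sub-object whenever its content is unchanged so that connected
// components see the same reference and skip re-rendering.
function reconcile(oldData, newData) {
  const merged = {};
  Object.keys(newData).forEach(key => {
    merged[key] = isEqual(oldData[key], newData[key])
      ? oldData[key]  // unchanged: keep the old reference
      : newData[key]; // changed: take the new object
  });
  return merged;
}

// In the reducer:
// return { data: reconcile(state.data, action.payload) };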
Also make sure to use the React key property correctly. Each of those 1000 items needs its own ID that stays the same from JSON to JSON.
Finally, consider making your state itself more amenable to such updates. You could transform the JSON when loading it into the state. For instance, you could store the items keyed by ID, which would make the update process a lot faster, as sketched below.
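A sketch of that transformation, assuming each item carries an id field (the helper names are illustrative):

// Key the items by ID once, when the JSON is loaded into the store.
function normalize(items) {
  const byId = {};
  const ids = [];
  items.forEach(item => {
    byId[item.id] = item;
    ids.push(item.id);
  });
  return { byId, ids };
}

// An update then rebuilds only one entry and the top-level map:
function updateItem(state, id, changes) {
  return {
    ...state,
    byId: { ...state.byId, [id]: { ...state.byId[id], ...changes } }
  };
}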
Related
In our Single Page Application we've developed a centralized store class that uses an RxJS BehaviorSubject to handle the state of our application and all its mutations. Several components in our application subscribe to our store's BehaviorSubject in order to receive any update to the current application state. This state is then bound to the UI so that whenever the state changes, the UI reflects those changes. Whenever a component wants to change a part of the state, we call a function exposed by our store that does the required work and updates the state by calling next on the BehaviorSubject. So far nothing special. (We're using Aurelia as a framework, which performs two-way binding.)
The issue we are facing is that as soon as a component changes the local state variable it receives from the store, other components get updated even though next() wasn't called on the subject itself.
We also tried to subscribe to an observable version of the subject, since observables are supposed to send a different copy of the data to every subscriber, but it looks like that's not the case.
It looks like all subject subscribers receive a reference to the object stored in the BehaviorSubject.
import { BehaviorSubject } from 'rxjs';

const initialState = {
  data: {
    id: 1,
    description: 'initial'
  }
}
const subject = new BehaviorSubject(initialState);
const observable = subject.asObservable();
let stateFromSubject; //Result after subscription to subject
let stateFromObservable; //Result after subscription to observable
subject.subscribe((val) => {
  console.log(`**Received ${val.data.id} from subject`);
  stateFromSubject = val;
});
observable.subscribe((val) => {
  console.log(`**Received ${val.data.id} from observable`);
  stateFromObservable = val;
});
stateFromSubject.data.id = 2;
// Both stateFromObservable and subject.getValue() now have an id of 2.
// next() wasn't called on the subject but its state got changed anyway
stateFromObservable.data.id = 3;
// Since observables aren't bi-directional, I thought this would be a possible solution, but the same applies and all variables now show 3
I've made a stackblitz with the code above.
https://stackblitz.com/edit/rxjs-bhkd5n
The only workaround we have so far is to clone the state in some of our subscribers where we support editing through binding, like so:
observable.subscribe((val) => {
  stateFromObservable = JSON.parse(JSON.stringify(val));
});
But this feels more like a hack than a real solution. There must be a better way...
Yes, all subscribers receive the same instance of the object in the behavior subject; that is how behavior subjects work. If you are going to mutate the objects, you need to clone them.
I use this function to clone the objects I am going to bind to Angular forms:
const clone = obj =>
  Array.isArray(obj)
    ? obj.map(item => clone(item))
    : obj instanceof Date
      ? new Date(obj.getTime())
      : obj && typeof obj === 'object'
        ? Object.getOwnPropertyNames(obj).reduce((o, prop) => {
            o[prop] = clone(obj[prop]);
            return o;
          }, {})
        : obj;
So if you have an observable data$ you can create an observable clone$ where subscribers to that observable get a clone that can be mutated without affecting other components.
clone$ = data$.pipe(map(data => clone(data)));
So components that are just displaying data can subscribe to data$ for efficiency and ones that will mutate the data can subscribe to clone$.
Have a read of my library for Angular https://github.com/adriandavidbrand/ngx-rxcache and my article on it https://medium.com/@adrianbrand/angular-state-management-with-rxcache-468a865fc3fb. It goes into the need to clone objects so we don't mutate the data we bind to forms.
It sounds like the goals of your store are the same as my Angular state management library. It might give you some ideas.
I am not familiar with Aurelia or whether it has pipes, but that clone function is available in the store by exposing my data with a clone$ observable, and in templates with a clone pipe that can be used like:
data$ | clone as data
The important part is knowing when to clone and not to clone. You only need to clone if the data is going to be mutated. It would be really inefficient to clone an array of data that is only going to be displayed in a grid.
"The only workaround we have so far is to clone the state in some of our subscribers where we support editing through binding..."
I don't think I can answer that without rewriting your store.
const initialState = {
  data: {
    id: 1,
    description: 'initial'
  }
}
That state object has deeply nested data. Every time you need to mutate the state, the object has to be reconstructed.
Alternatively,
const initialState = {
  1: {id: 1, description: 'initial'},
  2: {id: 2, description: 'initial'},
  3: {id: 3, description: 'initial'},
  _index: [1, 2, 3]
};
That is about as deep a state object as I would create. Use a key/value pair to map between IDs and the object values. You can now write selectors easily.
import { Observable } from 'rxjs';
import { map, distinctUntilChanged } from 'rxjs/operators';

function getById(id: number): Observable<any> {
  return subject.pipe(
    map(state => state[id]),
    distinctUntilChanged()
  );
}

function getIds(): Observable<number[]> {
  return subject.pipe(
    map(state => state._index),
    distinctUntilChanged()
  );
}
When you want to change a data object, you have to reconstruct the state and also set the data.
function append(data: { id: number }) {
  const state = subject.value;
  subject.next({ ...state, [data.id]: Object.freeze(data), _index: [...state._index, data.id] });
}

function remove(id: number) {
  const state = { ...subject.value };
  delete state[id];
  subject.next({ ...state, _index: state._index.filter(x => x !== id) });
}
Once that is done, you should freeze the state object for downstream consumers:
const subject = new BehaviorSubject(initialState);

function getStore(): Observable<any> {
  return subject.pipe(
    map(obj => Object.freeze(obj))
  );
}

function getById(id: number): Observable<any> {
  return getStore().pipe(
    map(state => state[id]),
    distinctUntilChanged()
  );
}

function getIds(): Observable<number[]> {
  return getStore().pipe(
    map(state => state._index),
    distinctUntilChanged()
  );
}
Later when you do something like this:
stateFromSubject.data.id = 2;
You'll get a run-time error (in strict mode; outside strict mode the write fails silently).
FYI: The above is written in TypeScript
The big logical issue with your example is that the value forwarded by the subject is a single object reference. RxJS doesn't do anything out of the box to create clones for you, and that is fine; otherwise it would do unnecessary work by default even when it isn't needed.
So while you can clone the value received by the subscribers, you're still not safe from access via BehaviorSubject.getValue(), which returns the original reference. Besides that, having the same refs for parts of your state is actually beneficial in lots of ways, e.g. arrays can be reused by multiple displaying components instead of being rebuilt from scratch.
What you want to do instead is leverage a single-source-of-truth pattern, similar to Redux, where instead of making sure that subscribers get clones, you treat your state as an immutable object. That means every modification results in a new state. That further means you should restrict modifications to actions (actions + reducers in Redux), which construct a new state from the current one plus the necessary changes and return the new copy, as sketched below.
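A minimal sketch of that pattern against the initialState from your example (the action name is illustrative, not any library's API):

const subject = new BehaviorSubject(initialState);

// An "action": the only place allowed to change state. It never mutates
// the current object; it builds a new one and pushes it through next().
function setDescription(description) {
  const state = subject.getValue();
  subject.next({
    ...state,
    data: { ...state.data, description }
  });
}

// Subscribers now rely on reference changes instead of defensive clones.
subject.subscribe(state => console.log(state.data.description));
setDescription('updated'); // logs 'updated' with a brand-new state object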
Now all of that might sound like a lot of work, but you should take a look at the official Aurelia Store plugin, which shares pretty much the same concept as your store while bringing the best ideas of Redux over to the world of Aurelia.
I have a filtered list of items based on a getAllItems query, which takes a filter and an order-by option as arguments.
After creating a new item, I want to delete the cache for this query, no matter what variables were passed. I don't know how to do this.
I don't think updating the cache is an option. The methods mentioned in the Apollo Client documentation (updating the cache after a mutation, refetchQueries, and update) all seem to need a given set of variables, but since the filter is a complex object (with some text information), I would need to update the cache for every set of variables that was previously submitted. I don't know how to do this. Plus, only the server knows how the new item impacts pagination and ordering.
I don't think fetch-policy (for instance setting it to cache-and-network) is what I'm looking for either, because while hitting the network is what I want after creating a new item, when I'm just filtering the list (typing in a search string) I want to keep the default behavior (cache-first).
client.resetStore would reset the store for all types of queries (not only the getAllItems query), so I don't think it's what I'm looking for either.
I'm pretty sure I'm missing something here.
There's no officially supported way of doing this in the current version of Apollo, but there is a workaround.
In your update function, after creating an item, you can iterate through the cache and delete every node whose key starts with the typename you are trying to remove from the cache, e.g.:
// Loop through all the data in our cache
// and delete any item whose key starts with "Item".
// This empties the cache of all of our items and
// forces a refetch of the data only when it is next requested.
Object.keys(cache.data.data).forEach(key =>
  key.match(/^Item/) && cache.data.delete(key)
)
This works for queries that exist multiple times in the cache with different sets of variables, i.e. paginated queries.
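For context, a sketch of where that loop lives: the update callback of the mutation (the mutation and variables here are assumed, and it relies on the cache's internal data API, as above):

client.mutate({
  mutation: CREATE_ITEM_MUTATION, // hypothetical mutation
  variables: { input },
  update: (cache) => {
    // Evict every cached Item node so the next getAllItems call,
    // whatever its variables, goes back to the server.
    Object.keys(cache.data.data).forEach(key =>
      key.match(/^Item/) && cache.data.delete(key)
    );
  }
});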
I wrote an article on Medium that goes into much more detail on how this works, as well as an implementation example and an alternative solution that is more complicated but works better in a small number of use cases. Since the article goes into more detail on a concept I have already explained in this answer, I believe it is OK to share here: https://medium.com/@martinseanhunt/how-to-invalidate-cached-data-in-apollo-and-handle-updating-paginated-queries-379e4b9e4698
This worked for me (it requires Apollo Client 3 for the cache eviction feature): it clears any query matching the regexp from the cache.
After clearing the cache, the query will be automatically refetched without needing to trigger a refetch manually (if you are using Angular: gql.watch().valueChanges will perform the request and emit the new value).
export const deleteQueryFromCache = (cache: any, matcher: string | RegExp): void => {
  const rootQuery = cache.data.data.ROOT_QUERY;
  Object.keys(rootQuery).forEach(key => {
    if (key.match(matcher)) {
      cache.evict({ id: "ROOT_QUERY", fieldName: key });
    }
  });
};
Or, ngrx-like, with a local resolver:
resolvers = {
  removeTask(
    parent,
    { id },
    { cache, getCacheKey }: { cache: InMemoryCache | any; getCacheKey: any }
  ) {
    const key = getCacheKey({ __typename: "Task", id });
    const { [key]: deleted, ...data } = cache.data.data;
    cache.data.data = { ...data };
    return id;
  }
}
I have an initial state which looks like this (simplified for the purpose of this question):
export default {
  anObject: {
    parameters: {
      param1: 'Foo',
      param2: 'Bar'
    },
    someOtherProperty: 'value'
  }
};
And I have a reducer for the anObject part, which among other things deals with changes to parameters. I have an action which passes the id of the parameter to change, along with the newValue for that parameter. The reducer (again, very slightly simplified) looks like this:
import * as types from '../actions/actionTypes';
import initialState from './initialState';

export default function anObjectReducer(state = initialState.anObject, action) {
  switch (action.type) {
    case types.UPDATE_PARAMETER:
      return Object.assign(
        {},
        state,
        {
          parameters: Object.assign(
            {},
            state.parameters,
            { [action.id]: action.newValue })
        });
    default:
      return state;
  }
}
This reducer looks wrong to me. I assumed I would be able to just do it like this:
case types.UPDATE_PARAMETER:
return Object.assign({},state,{parameters:{[action.id]:action.newValue}});
But this seems to wipe out all the other parameters and just update the single one being changed. Am I missing something obvious about how to structure my reducer?
In case it's relevant, this is how I set up my root reducer:
import { combineReducers } from 'redux';
import anObject from './anObjectReducer';
export default combineReducers({
anObject
});
I thought there might be a way to compose reducers for the individual parts of each object, i.e. separately for the parameters and someOtherProperty parts of anObject in my example?
The reason it wipes out the other parameters is that you don't pass the previous values into the inner Object.assign.
You should have done this:
return Object.assign({}, state, {
  parameters: Object.assign({}, state.parameters, { [action.id]: action.newValue }),
});
Or with the ES6 spread syntax: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator
return {
...state,
parameters: {
...state.parameters,
[action.id]: action.newValue,
}
}
You can:
restructure your reducers: you can use combineReducers not only for the store's root. The store shape stays the same, as do the actions, but the reducers for nested objects become lightweight (see the sketch after this list)
restructure your state (and reducers) to keep it as flat as possible. It takes more effort, but it typically also makes it easier to fetch data in mapStateToProps. normalizr can ease the transition by encapsulating integration with an existing API that requires a specific data structure
use immer to write code as if you were mutating state. This is definitely a bad idea while you are still learning React, but I'd consider using it on small real projects
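A minimal sketch of the first option, reusing the names from the question (the inline action type stands in for your actionTypes import):

import { combineReducers } from 'redux';

// Each nested slice gets its own small reducer.
function parameters(state = { param1: 'Foo', param2: 'Bar' }, action) {
  switch (action.type) {
    case 'UPDATE_PARAMETER':
      // Only this slice is rebuilt; the rest of the tree keeps its references.
      return { ...state, [action.id]: action.newValue };
    default:
      return state;
  }
}

function someOtherProperty(state = 'value', action) {
  return state;
}

// combineReducers is not limited to the root of the store:
const anObject = combineReducers({ parameters, someOtherProperty });

export default combineReducers({ anObject });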
I'm just getting started with RxJS to see if it can replace my currently manual data streams. One thing I'm trying to port is a situation whereby the last value in the stream is remembered, so future observers always get the 'current' value as well as subsequent ones. This seems to be fulfilled by BehaviorSubject.
However, I need to do this for a group of entities. For example, I might have data that represents a message from a user:
{ userId: 1, message: "Hello!" }
And I want a BehaviorSubject-like object that stores the last message for every user. Is this something I can do with RxJS out of the box, or would I need to build it myself? (If so, any pointers would be appreciated.)
EDIT: After some more thought, it perhaps seems logical to have an 'incoming' subject, an observer that updates a Map, and then a function I can call that initialises an Observable from the map values and merges it with the incoming stream...?
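That idea is essentially a scan over the incoming stream; a minimal sketch (RxJS 6 operator names, message shape as above, stream names assumed):

import { Subject } from 'rxjs';
import { scan, shareReplay } from 'rxjs/operators';

const incoming$ = new Subject();

// Fold every message into a Map of userId -> last message, and replay
// the latest Map to late subscribers (the BehaviorSubject-like part).
const latestByUser$ = incoming$.pipe(
  scan((acc, msg) => new Map(acc).set(msg.userId, msg), new Map()),
  shareReplay(1)
);

latestByUser$.subscribe(byUser => console.log([...byUser.values()]));
incoming$.next({ userId: 1, message: 'Hello!' }); // logs [{ userId: 1, ... }]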
I use RxJS with a Redux-like state setup. I have a BehaviorSubject that holds the current state, and every time an event/action is fired, that current state gets passed through functions that produce a new state, to which the subject is subscribed.
Here's a simplified version of what I use:
export default class Flux {
  constructor(state) {
    // all resources are saved here for disposal (for example, when hot loading)
    this.resources = [];
    // this is just a flux dispatcher
    this.dispatcher = new Rx.Subject();
    // BehaviorSubject constructed with the initial state
    this.state = new Rx.BehaviorSubject(state);
  }

  addStore(store, initialState, feature = store.feature) {
    this.resources.push(
      this.dispatcher
        .share()
        // this is where the "reduction" happens. store is a reducer that
        // takes the existing state and returns the new state
        .flatMap(({action, payload}) =>
          store(this.state.getValue(), action, payload))
        .startWith(initialState || {})
        .subscribe(this.state)
    );
    return this;
  }

  addActions(actions) {
    // actions (an Rx.Subject of {action, payload} pairs) are fed to the dispatcher
    this.resources.push(actions.share().subscribe(this.dispatcher));
    return this;
  }
}
I create a global Flux object which manages state. Then, for every "feature" or "page" or whatever I wish, I add actions and stores. It makes managing state very easy, and things like time travel come essentially for free with Rx.
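For context, a hypothetical usage sketch of that class (the counter store and action are made up):

const actions$ = new Rx.Subject();

// A "store" here is a reducer that returns an Observable of the new state,
// since addStore flatMaps over its result.
function counterStore(state, action, payload) {
  switch (action) {
    case 'INCREMENT':
      return Rx.Observable.of({ ...state, count: (state.count || 0) + payload });
    default:
      return Rx.Observable.of(state);
  }
}

const flux = new Flux({ count: 0 })
  .addStore(counterStore, { count: 0 })
  .addActions(actions$);

flux.state.subscribe(state => console.log(state.count)); // logs 0
actions$.next({ action: 'INCREMENT', payload: 1 });      // then logs 1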
I have some data stored on the client side with Session.set(...) (which is then rendered into a template).
This data changes dynamically on the server side. How can I synchronize it, so the client updates its templates any time the data changes on the server? The best method would be publish/subscribe, but it's designed for use with a database.
This is what I've ended up with so far:
if (Meteor.isClient) {
  Session.setDefault('dynamicArray', [{text: "item1"}, {text: "item2"}]);
  Template.body.helpers({
    dynamicData: function(){
      return Session.get('dynamicArray');
    }
  });
  // place for code to sync dynamicArray with server
}

if (Meteor.isServer) {
  Meteor.startup(function () {
    var dynamicArray = [{text: "item3"}, {text: "item4"}, {text: "item5"}];
    // place for code to publish dynamicArray for client
  });
}
Regarding your comment, you will need to create a DynamicData collection first, located outside the .isClient and .isServer conditionals. From there, .find() will let you fetch data from the server in the form of a cursor, which can be iterated over with {{#each dynamicData}}. An example of how you might set up the collection and the helper is as follows:
DynamicData = new Mongo.Collection('dynamicData'); // sets up the new collection

if (Meteor.isClient) {
  Template.body.helpers({
    dynamicData: function(){
      // only fetch the dynamicArray field of each document
      return DynamicData.find({}, { fields: { dynamicArray: 1 } });
    }
  });
}
Of course, this depends on how the document(s) you are retrieving are structured and what you are using them for. For instance, if you're only looking to return a single dynamicArray, you might be better off using:
return DynamicData.findOne({}, { fields: { dynamicArray: 1 } }).dynamicArray;
...since this will return the array [item1, item2, item3] directly. This seems to be what you're looking for; I used the same method to replace an initial over-reliance on session data to sync information. The key point is to make server info available to the client through the helpers, which bypasses the need to sync via session data. Hope this helps.
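To close the loop with the original question, a sketch of the publish/subscribe wiring for that collection (the publication name and seeding logic are assumed):

DynamicData = new Mongo.Collection('dynamicData');

if (Meteor.isServer) {
  Meteor.startup(function () {
    // Seed the collection once, mirroring the array from the question.
    if (DynamicData.find().count() === 0) {
      DynamicData.insert({
        dynamicArray: [{text: "item3"}, {text: "item4"}, {text: "item5"}]
      });
    }
  });

  Meteor.publish('dynamicData', function () {
    return DynamicData.find();
  });
}

if (Meteor.isClient) {
  Meteor.subscribe('dynamicData');

  Template.body.helpers({
    dynamicData: function () {
      var doc = DynamicData.findOne({}, { fields: { dynamicArray: 1 } });
      return doc && doc.dynamicArray;
    }
  });
}

Any server-side change to the document then reaches the client reactively, with no session syncing required.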