My question is regarding how to organize my reducers.
Let's say I have a state shape like this:
resources: [
{
id: 1,
name: "John",
/* ... 100 more properties ... */
},
{
id: 2,
name: "Daniel",
/* ... 100 more properties ... */
},
]
events: {
planned: [
{
resourceId: 1,
name: "Concert with Adele",
/* ... 100 more properties ... */
}
]
}
As a start, let's say we have a single reducer in place. The business logic is as follows:
1. One can add and remove a resource at any time.
2. If there's an event without a resource, the first resource will be added to the event.
3. If there's an event with a resource and the resource is removed and there are no other resources, the resourceId will be unset.
4. If there's an event with a resource and the resource is removed and there are other resources, the resourceId will be set to the first of the remaining resources.
There is a lot of logic in place to handle the "resources" branch of the state tree, and at the same time a lot of logic to handle the "events" branch.
Points 2, 3 and 4 introduce a dependency between "resources" and "events" that I'm trying to resolve in the best possible way.
Naive solution
The naive solution would be to have one reducer handle all of the actions. That way, when a resource is removed, we can simply remove it, check whether any other resources are left, and then update the events accordingly. However, keeping the logic for 'resources' and 'events' together just because of this doesn't feel right.
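For illustration, a minimal sketch of what that single combined reducer might do for the removal case (the action names and shapes here are just assumptions):

function resourcesAndEventsReducer(state, action) {
  switch (action.type) {
    case 'REMOVE_RESOURCE': {
      const resources = state.resources.filter(r => r.id !== action.id);
      // Rules 3 and 4: fall back to the first remaining resource, or unset
      const defaultId = resources.length > 0 ? resources[0].id : null;
      return {
        ...state,
        resources,
        events: {
          ...state.events,
          planned: state.events.planned.map(event =>
            event.resourceId === action.id
              ? { ...event, resourceId: defaultId }
              : event
          ),
        },
      };
    }
    default:
      return state;
  }
}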
Maintain own list
Another alternative would be to split the handling of resources and events into two different reducers. In the events part of the state tree, we could keep a list of available resourceIds so that we know how to update our events. The events reducer can still respond to the same actions as the resources reducer, only saving the data relevant to it. Something like:
resources: [
{
id: 1,
name: "John",
/* ... 100 more properties ... */
},
{
id: 2,
name: "Daniel",
/* ... 100 more properties ... */
},
]
events: {
resourceIds: [1,2], /* ADDED: Keeping track of available resources */
planned: [
{
resourceId: 1,
name: "Concert with Adele",
/* ... 100 more properties ... */
}
]
}
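A sketch of what the events reducer could look like under this approach, responding to the same actions as the resources reducer but only saving the ids (action names and shapes are assumptions):

function eventsReducer(state = { resourceIds: [], planned: [] }, action) {
  switch (action.type) {
    case 'ADD_RESOURCE':
      // Only the id is relevant here; the rest lives in the resources reducer
      return { ...state, resourceIds: [...state.resourceIds, action.resource.id] };
    case 'REMOVE_RESOURCE': {
      const resourceIds = state.resourceIds.filter(id => id !== action.id);
      const defaultId = resourceIds.length > 0 ? resourceIds[0] : null;
      return {
        ...state,
        resourceIds,
        planned: state.planned.map(event =>
          event.resourceId === action.id
            ? { ...event, resourceId: defaultId }
            : event
        ),
      };
    }
    default:
      return state;
  }
}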
Proxy actions
A third option would be to create a listener (with, for instance, redux-saga) for REMOVE_RESOURCE actions, which then triggers another action, UPDATE_DEFAULT_RESOURCE_ID, containing the current default resource id.
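Roughly, such a saga might look like this (the action names and the selector are assumptions):

import { takeEvery, select, put } from 'redux-saga/effects'

// Runs after the resources reducer has already removed the resource
function* handleRemoveResource() {
  const resources = yield select(state => state.resources)
  const defaultId = resources.length > 0 ? resources[0].id : null
  yield put({ type: 'UPDATE_DEFAULT_RESOURCE_ID', resourceId: defaultId })
}

export function* watchRemoveResource() {
  yield takeEvery('REMOVE_RESOURCE', handleRemoveResource)
}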
Any thoughts on what the right approach here is?
I personally love to split my code into ducks and use a normalized state, as you described in your option 2. Whenever I need to do some complex logic involving two or more reducers, I use redux-saga, as you mentioned in your third option.
This way I keep the "inter-reducer" logic separate and more easily testable via sagas. You also avoid ending up with a circular dependency between ducks.
In general, a circular dependency is often a strong signal of poor design.
Chain reducers, so that an action that updates your events uses the resources reducer; this separates the concerns of the two while maintaining the dependency.
function event_reducer(state, action) {
  switch (action.type) {
    case 'REMOVE_RESOURCE_FROM_EVENT':
      // Delegate choosing the new default resourceId to the resource reducer
      return { ...state, resourceId: resource_reducer(action.resources, action) };
    default:
      return state;
  }
}

function resource_reducer(resources, action) {
  switch (action.type) {
    case 'REMOVE_RESOURCE_FROM_EVENT': {
      // Pick the first remaining resource, or null if none are left
      // (assumes the action carries the remaining resources)
      let id = null;
      if (resources.length > 0) {
        id = resources[0].id;
      }
      return id;
    }
    default:
      return resources;
  }
}
Context
This problem is likely predicated on certain choices, some of which are changeable and some of which are not. We are using the following technologies and frameworks:
Relay / React / TypeScript
ContentStack (CMS)
Problem
I'm attempting to create a highly customizable page that can be built from multiple kinds of UI components based on the data presented to it (so that pages can be assembled in a CMS from prefab UI components in an unpredictable order).
My first attempt at this was to create a set of fragments for the potential UI components that may be referenced in an array:
query CustomPageQuery {
title
description
customContentConnection {
edges {
node {
... HeroFragment
... TweetBlockFragment
... EmbeddedVideoFragment
# Further fragments are added here as we add more kinds of UI
}
}
}
}
In the CMS we're using (ContentStack), the complexity of this query has grown to the point that it is rejected because it requires too many calls to the database in a single query. For that reason, I'm hoping there's a way I can split up the calls for the fragments so that they are not part of the initial query, or some similar solution that results in splitting up this query into multiple pieces.
I was hoping the @defer directive would solve this for me, but it's not supported by relay-compiler.
Any ideas?
Sadly, @defer is still not standardized, so it is not supported by most implementations (you would also need the server to support it).
I am not sure if I understand the problem correctly, but you might want to look at using @skip or @include to only fetch the fragments you need, depending on the type of the thing. But it would require the frontend to know what it wants to query beforehand.
query CustomPageQuery($hero: Boolean, $tweet: Boolean, $video: Boolean) {
title
description
customContentConnection {
edges {
node {
... HeroFragment @include(if: $hero)
... TweetBlockFragment @include(if: $tweet)
... EmbeddedVideoFragment @include(if: $video)
}
}
}
}
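The frontend then supplies the flags as query variables once it knows which kinds of content the page needs, for example:

const variables = { hero: true, tweet: false, video: false };
// pass `variables` along with CustomPageQuery when fetching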
Generally you want to be able to discriminate the type without having to do a database query. So say:
type Hero {
id: ID
name: String
}
type Tweet {
id: ID
content: String
}
union Content = Hero | Tweet
{
Content: {
__resolveType: (parent, ctx) => {
// That should be able to resolve the type without a DB query
},
}
}
Once that is passed, each fragment is then resolved, making more database queries. If those are not properly batched with dataloaders, then you have an N+1 problem. I am not sure how much control (if any) you have over the backend, but there is no silver bullet for your problem.
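If you do have some control over the backend, batching those per-fragment lookups with the dataloader package looks roughly like this (fetchContentByIds is an assumed helper that returns rows in the same order as the ids):

const DataLoader = require('dataloader');

// Collect every content lookup made while resolving one query into a single batched call
const contentLoader = new DataLoader(async (ids) => {
  const rows = await fetchContentByIds(ids);
  return ids.map(id => rows.find(row => row.id === id));
});

// In a resolver:
// const hero = await contentLoader.load(parent.heroId);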
If you can't make optimizations on the backend, then I would suggest trying to limit the connection. They seem to be using cursor-based pagination, so you start with, say, first: 10, and once the first batch is returned, you can query the next elements by setting after to the last cursor of the previous batch:
query CustomPageQuery($after: String) {
customContentConnection(first: 10, after: $after) {
edges {
cursor
node {
... HeroFragment
... TweetBlockFragment
... EmbeddedVideoFragment
}
}
pageInfo {
hasNextPage
}
}
}
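Client-side, getting the variables for the next page is then just a matter of reading the last edge's cursor (assuming data holds the previous response):

const edges = data.customContentConnection.edges;
const after = edges[edges.length - 1].cursor;
// re-run CustomPageQuery with { after } as long as pageInfo.hasNextPage is true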
As a last resort, you could try to first fetch all the IDs and then make subsequent queries to the CMS for each id (using aliases, I guess) or type (if you can filter on the connection field). But I feel dirty just writing it, so avoid it if you can.
{
one: node(id: "UUID1") {
... HeroFragment
... TweetBlockFragment
... EmbeddedVideoFragment
}
two: node(id: "UUID2") {
... HeroFragment
... TweetBlockFragment
... EmbeddedVideoFragment
}
}
I'm dealing with a big JSON document with a lot of editable values (big meaning > 1000), entirely rendered on the same page, so my state is simply { data: bigBigJson }.
The initial rendering is quite long, but that's OK.
The problem is that when an input triggers an onChange (and a Redux action), the value is updated in the state, and the whole rendering happens again.
I wonder how people deal with that? Are there simple solutions (even if not necessarily best practices)?
Notes:
The JSON document is provided by an external API; I can't change it
I could separate the state into several sub-states (it's a multi-level JSON document), but I'm hoping for a simpler/faster solution (I know that would probably be a best practice, though)
I'm using React and Redux, not Immutable.js, but everything is immutable (obviously)
––
Update (about DSS answer)
• (Case 1) Let's say the state is:
{
data: {
key1: value1,
// ...
key1000: value1000
}
}
If keyN is updated, the whole state would be re-rendered anyway, right? The reducer would return something like:
{
  data: {
    ...state.data,
    keyN: newValueN
  }
}
That's one thing, but it's not really my case.
• (Case 2) The state is more like (oversimplified):
{
data: {
dataSet1: {
key1: value1,
// ...
key10: value10
},
// ...
dataSet100: {
key1: value1,
// ...
key10: value10
}
}
}
If dataSetN.keyN is updated, I would return in the reducer:
{
data: {
...state.data,
dataSetN: {
...state.data.dataSetN,
keyN: newValueN
}
}
}
I guess I'm doing something wrong, as it doesn't look very nice.
Would it change anything to structure it like this:
// state
{
dataSet1: {
key1: value1,
// ...
key10: value10
},
// ...
dataSet100: {
key1: value1,
// ...
key10: value10
}
}
// reducer
{
...state,
dataSetN: {
...state.dataSetN,
keyN: newValueN
}
}
Finally, just to be more specific about my case, here is closer to what my reducer actually looks like (still a bit simplified):
import get from 'lodash/fp/get'
import set from 'lodash/fp/set'
// ...
// reducer:
// path = 'values[3].values[4].values[0]'
return {
data: set(path, {
...get(path, state.data),
value: newValue
}, state.data)
}
• In case you are wondering, I can't just use:
data: set(path + '.value', newValue, state.data)
as other properties need to be updated as well.
The reason everything gets re-rendered is that everything in your store changes. It may look the same. All properties may have the same values. But all object references have changed. That is to say, even if two objects have the same properties, they still have separate identities.
Since React-Redux uses object identity to figure out whether an object has changed, you should always make sure to reuse the same object reference whenever an object has not changed. Since Redux state must be immutable, using the old object in the new state is guaranteed not to cause problems. Immutable objects can be reused the same way an integer or a string can be reused.
To solve your dilemma, you can, in your reducer, go over the JSON and the store state sub-objects and compare them. If they are the same, make sure to reuse the store object. By reusing the same object, React-Redux will make sure the components that represent those objects are not re-rendered. This means that if only one of those 1000 objects changes, only one component will update.
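A rough sketch of that comparison, assuming data is an object of sub-objects and using lodash's isEqual for the deep comparison:

import isEqual from 'lodash/isEqual'

// Keep the old reference for every sub-object that is deeply equal to the incoming one,
// so connected components see "no change" and skip re-rendering
function mergeKeepingReferences(oldData, newData) {
  const merged = {};
  Object.keys(newData).forEach(key => {
    merged[key] = isEqual(oldData[key], newData[key]) ? oldData[key] : newData[key];
  });
  return merged;
}

// In the reducer:
// case 'RECEIVE_DATA':
//   return { ...state, data: mergeKeepingReferences(state.data, action.data) };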
Also make sure to use the React key property correctly. Each of those 1000 items needs its own ID that stays the same from JSON to JSON.
Finally, consider making your state itself more amenable to such updates. You could transform the JSON when loading it and when updating the state. You could, for instance, store the items keyed by ID, which would make the update process a lot faster.
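For example, a sketch of keying the items by ID when the JSON is loaded (assuming each item has an id):

// Key the incoming items by ID once, when the JSON is loaded
function normalizeById(items) {
  const byId = {};
  items.forEach(item => {
    byId[item.id] = item;
  });
  return byId;
}

// An update then only touches one key:
// case 'UPDATE_ITEM':
//   return {
//     ...state,
//     byId: { ...state.byId, [action.id]: { ...state.byId[action.id], value: action.value } }
//   };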
What is the best way to 'reorder' a connection in RelayJS?
In my user interface, I allow my user to 'swap' two items, but creating a mutation around that is a bit tricky.
What I'm doing right now is the naive way, namely using FIELDS_CHANGE to change my node itself.
It works, but the problem is I can't seem to write an optimistic update for it. I am able to just pass a list of ids to my graphql server, but that doesn't work for the optimistic update because it expects the actual data.
So I guess I have to mock out my 'connection' interface, but unfortunately it still doesn't work. I 'copied' my reordered nodes into getOptimisticResponse, but it seems to be ignored. The data matches the actual server response. (ids simplified)
original:
{
  item: {
    edges: [
      {cursor: 1, node: {id: 2}},
      {cursor: 2, node: {id: 1}}
    ]
  }
}
optimistic response (doesn't do anything):
{
  item: {
    edges: [
      {node: {id: 1}},
      {node: {id: 2}}
    ]
  }
}
server response:
{
  item: {
    edges: [
      {cursor: 1, node: {id: 1}},
      {cursor: 2, node: {id: 2}}
    ]
  }
}
What gives? It's equivalent (except for the cursor), and even if I add the cursor in, it still doesn't work.
What am I doing wrong? Also, is there an easier way to mock my ids into a connection?
Also, as an aside, is there a way to get this data piecemeal? Right now, reordering two items re-requests the whole list because of my mutation config. I suppose I could do it with RANGE_ADD and RANGE_DELETE to 'simulate a swap', but is there an easier way to do it?
Since you trigger a mutation in response to the user reordering the items, I assume you store the position or order of the items on the server side. For what you're doing, one way of creating an optimistic response is to use that position or order information. On the server side, an item needs to provide an additional field, position. On the client side, the displayed items are sorted by position.
When the user swaps two items, in the optimistic response of your client-side mutation, you just need to swap the position fields of those two items. The same applies to the server-side mutation.
The optimistic response code can be like:
getOptimisticResponse() {
return {
item1: {
id: this.props.item1.id,
position: this.props.item2.position,
},
item2: {
id: this.props.item2.id,
position: this.props.item1.position,
},
};
}
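With this approach only the two swapped items change, so the mutation config can point FIELDS_CHANGE at just those two nodes instead of the whole connection. A sketch, assuming the mutation payload exposes item1 and item2:

getConfigs() {
  return [{
    type: 'FIELDS_CHANGE',
    fieldIDs: {
      item1: this.props.item1.id,
      item2: this.props.item2.id,
    },
  }];
}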
If I have an array of parent objects, each with an array of children, like this
{
parent: [
{
name: 'stu',
children: [
{name: 'bob'},
{name: 'sarah'}
]
},
{
...
}
]
}
and I want to cycle through each parent and cycle through their children in series, so that I don't start the next parent until all the children have been processed (some long asynchronous process), how do I do this with RxJS?
I have attempted this:
var doChildren = function (parent) {
console.log('process parent', parent.name);
rx.Observable.fromArray(parent.children)
.subscribe(function (child) {
console.log('process child', child.name);
// do some Asynchronous stuff in here
});
};
rx.Observable.fromArray(parents)
.subscribe(doChildren);
But I get all the parents logging out and then all the children.
concatMap works better here. If processing the children is asynchronous, their order can get mixed up with flatMap; concatMap ensures one parent is finished before the next one starts.
Rx.Observable.from(parents)
  .concatMap(function (p) {
    // Process this parent's children, in order, before moving on to the next parent
    return Rx.Observable.from(p.children)
      .concatMap(function (child) {
        return doSomeAsyncThing(child); // your async processing, returning a promise or observable
      });
  })
  .subscribe();
It looks like this was asked a while ago, but here's one way to deal with this scenario:
Rx.Observable.fromArray(parents)
.flatMap(function(parent) {
return parent.children;
})
.flatMap(function(child) {
return doSomeAsyncThing(child); //returns a promise or observable
})
.subscribe(function(result) {
// results of your async child things here.
});
The idea is to leverage flatMap, which will take any arrays, promises or observables returned and "flatten" them into an observable of individual items.
I think you might also benefit from flat-mapping the results of the async work you're doing on the child nodes, so I've added that in there. Then you can just subscribe to the results.
I still feel like this question is lacking some context, but hopefully this is what you're looking for.
I'm trying to learn Backbone by looking at an app that someone I know made, alongside the Backbone documentation. The app has a Bucket model and a Company model (i.e. you put companies in a bucket). There's one thing in this bit that I'm unclear about, namely how it uses the trigger method.
Backbone documentation has this to say about trigger:
trigger object.trigger(event, [*args])
Trigger callbacks for the given event, or space-delimited list of events. Subsequent arguments to trigger will be passed along to the event callbacks.
In the code I'm looking at, trigger is called like this:
this.trigger("add:companies", Companies.get(companyId));
Two questions:
The event, I assume, is adding a company, but at what point in the code below does that actually happen? Is it when this.set({ "companies": arr }, { silent: true }); is run or when this.save(); is run (or something else)?
If Companies.get(companyId) is the optional argument, what function is it actually passed to?
Excerpt from original code
window.Bucket = Backbone.Model.extend({
defaults: function() {
return {
companies: []
};
},
addCompany: function(companyId) {
var arr = this.get("companies");
arr.push(companyId);
this.set({ "companies": arr }, { silent: true });
this.save();
this.trigger("add:companies", Companies.get(companyId));
},
// ...
The companies property of the bucket is being updated in the addCompany method you describe. An annotated version of your example shows what's taking place:
// 1. get array of companies currently included in the bucket:
var arr = this.get("companies");
// 2. add a company to the array
arr.push(companyId);
// 3. replace the bucket's company list with the array,
// suppressing the change event:
this.set({"companies": arr}, {silent: true});
// 4. save the bucket:
this.save();
trigger isn't actually affecting the model; it's just a way of letting other pieces of the application know that the company has been added. You could turn around and catch it somewhere else by using on with the bucket model:
var bucket = new window.Bucket();
// do stuff
bucket.on('add:companies', function(company) {
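// company here is the argument that was passed to trigger, i.e. Companies.get(companyId)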
alert('a new company has been added to the bucket.');
});