How should I add child element to parent object in react reducer - react-redux

I've got a Redux parent object that I build up from various API calls and use across components. When I receive new API data, I add a nested object to this parent object.
My current method deep-clones the parent object, adds the new child, and then calls a reducer to update the store: return { ...state, parent_object: action.value };
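Spelled out, that approach is roughly the following (a sketch; deepClone, the action shape, and the state fields are assumptions for illustration):
// Outside the reducer: clone the current parent, attach the new child, dispatch.
const updatedParent = deepClone(store.getState().parent_object); // assumed deep-clone helper
updatedParent.children[newChild.id] = newChild;                  // nested object built from the API data
store.dispatch({ type: 'SET_PARENT', value: updatedParent });

// Reducer: replaces parent_object wholesale with the cloned-and-modified copy.
function reducer(state, action) {
  switch (action.type) {
    case 'SET_PARENT':
      return { ...state, parent_object: action.value };
    default:
      return state;
  }
}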
This works to an extent, although mapStateToProps is not being called when the reducer updates. Also, deep-cloning the parent object every time doesn't seem to make sense.
Could someone provide some insight into the proper way to do this?

Related

How can I use cache when loading 3d models in mapbox?

I have 3D models connected to cluster-nodes. When I switch from one node to another, the model is cleared from the screen and the other model is added to the map. When I select the previous node, I want the preloaded 3d model to come from the cache.
https://jsfiddle.net/1auox3f8/20/
For that you'll need to build your own cache (a Map object could be ideal) that stores the gltf.scene result.
You'll also need some functions wrapping your object-loader logic, and a properly scoped variable for the Map cache.
The first method you need (let's call it loadCache) first checks whether the key already exists in the Map; I'd suggest using the url as the key and a Promise as the value.
If the url key doesn't exist in your Map cache, you create a new entry whose value is a Promise. That promise calls another method (let's call it loadObj) that receives the url and invokes loader.load; once loader.load produces the gltf.scene object, it is handed back to loadCache and the promise resolves.
If the url key already exists in your Map cache, you just consume its result with .then, as in the sketch below.
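A rough sketch of that pattern (the names loadCache and loadObj follow the description above; the Map, the urls, and the GLTFLoader setup are assumptions):
// Module-scoped cache: url -> Promise resolving with the loaded gltf.scene
var modelCache = new Map();

// loadObj wraps the loader call and resolves once gltf.scene is available.
function loadObj(url) {
  return new Promise(function(resolve, reject) {
    var loader = new THREE.GLTFLoader(); // assumes the GLTFLoader extension is loaded
    loader.load(url, function(gltf) { resolve(gltf.scene); }, undefined, reject);
  });
}

// loadCache checks the Map first, so each url is fetched and parsed only once.
function loadCache(url) {
  if (!modelCache.has(url)) {
    modelCache.set(url, loadObj(url));
  }
  return modelCache.get(url); // always a Promise, cached or fresh
}

// Usage: the second call for the same url resolves from the cache.
loadCache('models/node-a.gltf').then(function(scene) {
  // add `scene` to the custom layer's three.js scene here
});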
If you are going to work heavily with Mapbox and three.js, I'd recommend taking a look at threebox, which already implements a cache in this way to optimize performance and load thousands of objects, among many other features. You can also check the cache logic implemented in threebox here.

Should React.PureComponent be used for components that are updated frequently?

If a component needs to render several times a second because of a prop change, should this component extend React.PureComponent?
The component has no child components, however, it is itself deeply nested... so the props are travelling through several other components.
In general, what are some key things to consider when deciding whether React.PureComponent should be used? In which scenarios is it a bad idea?
Yes, this sounds like a good case for PureComponent, because your component is unnecessarily being re-rendered frequently with the same props.
A child component that extends React.Component will call render every time its parent calls render. If the child component extends PureComponent instead, it will only call render when the parent passes props that don't shallowEqual the previously passed props.
It's generally safe to use PureComponent as long as:
your component and its children don't rely on context updates;
your component doesn't have object or array props that are directly mutated by its parents (shallowEqual will not detect these changes). See the sketch below.
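A minimal sketch of the difference (the component names and the value prop are made up):
import React from 'react';

// Re-renders every time its parent renders, even when the props are identical.
class PlainLabel extends React.Component {
  render() {
    return <span>{this.props.value}</span>;
  }
}

// Skips the re-render when a shallow comparison of props and state finds no
// change; roughly a built-in shouldComponentUpdate based on a shallow check.
class PureLabel extends React.PureComponent {
  render() {
    return <span>{this.props.value}</span>;
  }
}

// Caveat from the list above: if the parent mutates an object or array prop
// in place, the reference stays the same and PureLabel will not re-render.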

Should view call the store directly?

From Flux's TodoMVC example, I saw that the TodoApp component asks the store to get the state.
Should the view instead create an action and let the dispatcher call the store?
The views that are listening for the stores' "change" event are called controller-views, because they have this one controller-like aspect: whenever the stores change, they get data from the stores and pass it to their children through props.
The controller-views are the only views that should be calling the stores' getters. The getters should be the only public API that the stores expose. Stores have no setters.
It's very tempting to call the stores' getters within the render() method of some component deep in the tree, but this is an anti-pattern. It violates the unidirectional data flow, makes it more difficult to understand the flow of data through the application, and makes your rendering more expensive.
In the TodoMVC Flux example, the TodoApp component is the only controller-view.
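A sketch of that controller-view pattern, modeled on the TodoMVC example (the TodoStore getters/listeners and the TodoList child are used for illustration):
// Controller-view: the only component that reads from the store.
var TodoApp = React.createClass({
  getInitialState: function() {
    // initial read through the store's public getter
    return { todos: TodoStore.getAll() };
  },
  componentDidMount: function() {
    TodoStore.addChangeListener(this._onChange);
  },
  componentWillUnmount: function() {
    TodoStore.removeChangeListener(this._onChange);
  },
  _onChange: function() {
    // whenever the store emits "change", re-read it and pass data down as props
    this.setState({ todos: TodoStore.getAll() });
  },
  render: function() {
    // children receive data only through props, never from the store directly
    return <TodoList todos={this.state.todos} />;
  }
});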
You need to get values from the stores somehow:
Get the value directly from the store, e.g. postsStore.get('firstPost'). You won't be notified of changes, so don't use this method.
Get the value and subscribe to the store using the component's lifecycle methods:
componentWillMount: function() {
  var _this = this;
  // assumes subscribe() hands back an unsubscribe function, as many store APIs do
  this.unsubscribe = myStore.subscribe(function(newValue) {
    _this.setState({
      myValue: newValue
    });
  });
},
componentWillUnmount: function() {
  // don't forget to unsubscribe from the store here
  this.unsubscribe();
}
Get the value and subscribe to the store using mixins. Flux implementations usually give you a mixin for this, so the store's value is written to the component's state whenever it changes in the store.
Example from Reflux:
mixins: [Reflux.connect(myStore, 'myValue')],
render: function() {
  // here you have access to this.state.myValue
}
Subscribe to an action. This can be useful for rendering errors that you don't want to keep in a store, but you can use it for whatever you want.
The implementation is the same as the previous one, but with an action instead of a store, as in the example below.
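For example, with Reflux an action is itself listenable, so a component can subscribe to it directly (a sketch; the action, component, and error shape are made up, and Reflux.ListenerMixin takes care of unsubscribing on unmount):
var saveFailed = Reflux.createAction();

var ErrorBanner = React.createClass({
  mixins: [Reflux.ListenerMixin],
  getInitialState: function() {
    return { error: null };
  },
  componentDidMount: function() {
    // listen to the action itself rather than to a store
    this.listenTo(saveFailed, this.onSaveFailed);
  },
  onSaveFailed: function(error) {
    this.setState({ error: error });
  },
  render: function() {
    return this.state.error ? <div>{String(this.state.error)}</div> : null;
  }
});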
The best way to stay in sync with stores is to subscribe to them.
So the answer to your question is: yes, it's OK, and no, you shouldn't call methods on stores in components.
It's OK to call methods on a store if they are pure methods (they don't change the store's data), so you may call only the get methods.
But if you want (and you should) to be notified of changes in a store, you should subscribe to it. Since manual subscribing can be wrapped in a mixin, use one (your own, or one from a Flux library). So SubscribingMixin(MyStore) calls some methods on the store internally, but you don't call them yourself directly in the component.
And if you are thinking about reinventing Flux, note that there is no real difference between subscribing to a store and subscribing to an action, so it's possible to implement it so that all data flows through actions.
Views can get state from the stores directly.
Action + Dispatcher is the Flux way to change the stores' state, not to read existing store data.
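In other words, reads go through a store getter, while changes go through an action and the dispatcher. A generic sketch (the store, dispatcher, and action names are illustrative, using the flux package's Dispatcher):
var Dispatcher = require('flux').Dispatcher;
var EventEmitter = require('events').EventEmitter;

var AppDispatcher = new Dispatcher();
var _todos = []; // private store data

// Minimal store: getters plus a change event, no setters.
var TodoStore = Object.assign(new EventEmitter(), {
  getAll: function() { return _todos; },
  emitChange: function() { this.emit('change'); }
});

// Writes only ever happen inside the callback registered with the dispatcher.
AppDispatcher.register(function(action) {
  if (action.actionType === 'TODO_CREATE') {
    _todos.push(action.text);
    TodoStore.emitChange(); // notifies the controller-views
  }
});

// A view reads via the getter...
var todos = TodoStore.getAll();

// ...and triggers changes by dispatching an action, never by calling a setter.
AppDispatcher.dispatch({ actionType: 'TODO_CREATE', text: 'Buy milk' });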

How to bind a domain object to a JavaFX TreeView?

How can I bind a domain object to a JavaFX TreeView? ComboBox has getItems() and you can add something to that collection. TreeView does not seem to have such a method. I could only build the tree manually by adding TreeItems to the TreeView's root and then using getChildren().add(...) to add children, but there seems to be no way of just adding an observable tree structure.
The domain object can read itself from a file and write itself to a file. It has methods to modify its contents. How do I best hook this up with a TreeView so that the user can add and delete nodes?
I don't want GUI code (i.e., JavaFX classes) in my domain objects.
Do I need to write an Adapter class that can turn my domain object into a JavaFX tree? Then add listeners to the tree and map the changes back to the domain object? Or is there a better way?
Some time ago I had a similar problem, and I wrote a custom TreeItem implementation that can handle recursive data structures. There is a detailed explanation in a blog post here.
The code for the RecursiveTreeItem can be found as a gist.
As an example, think of a class Task that can contain many sub-tasks, and so on:
public class Task {
    private ObservableList<Task> subtasks = FXCollections.observableArrayList();

    public ObservableList<Task> getSubtasks() {
        return subtasks;
    }
}
In this case you could use it as follows:
Task root = new Task();
TreeItem<Task> rootItem = new RecursiveTreeItem<Task>(root, Task::getSubtasks);
tree.setRoot(rootItem);
The second parameter is of type Callback<T, ObservableList<T>>: a function that takes an element of T (in our case Task) and returns the child elements for this element. In the example I've used a method reference as a shortcut.
This is a fully reactive implementation, i.e. when a new sub-task is added, it will immediately be shown in the TreeView.
You said you don't want JavaFX classes in your domain model. In that case you could write something like this (not tested):
TreeItem<Task> rootItem = new RecursiveTreeItem<Task>(root,
        task -> FXCollections.observableArrayList(task.getSubtasks()));
Here getSubtasks() returns a plain List<Task> that is wrapped in an ObservableList. But of course, in this case the TreeView won't update automatically when your model changes.

Mimicking SQL Insert Trigger with LINQ-to-SQL

Using LINQ-to-SQL, I would like to automatically create child records when inserting the parent entity. Basically, mimicking how an SQL Insert trigger would work, but in-code so that some additional processing can be done.
The parent has an association to the child, but it seems that I cannot simply add new child records during the DataContext's SubmitChanges().
For example,
public partial class Parent
{
    partial void OnValidate(System.Data.Linq.ChangeAction action)
    {
        if (action == System.Data.Linq.ChangeAction.Insert)
        {
            Child c = new Child();
            // ... set properties ...
            this.Childs.Add(c);
        }
    }
}
This would be ideal, but unfortunately the newly created Child record is not inserted to the database. Makes sense, since the DataContext has a list of objects/statements and probably doesn't like new items being added in the middle of it.
Similarly, intercepting the partial void InsertParent(Parent instance) function in the DataContext and attempting to add the Child record yields the same result - no errors, but nothing added to the database.
Is there any way to get this sort of behaviour without adding code to the presentation layer?
Update:
Both the OnValidate() and InsertParent() functions are called from the DataContext's SubmitChanges() function. I suspect this is the inherent difficulty with what I'm trying to do - the DataContext will not allow additional objects to be inserted (e.g. through InsertOnSubmit()) while it is in the process of submitting the existing changes to the database.
Ideally I would like to keep everything under one Transaction so that, if any errors occur during the insert/update, nothing is actually changed in the database. Hence my attempts to mimic the SQL Trigger functionality, allowing the child records to be automatically inserted through a single call to the DataContext's SubmitChanges() function.
If you want it to happen just before it is saved, you can override SubmitChanges and call GetChangeSet() to get the pending changes. Look for the things you are interested in (for example, delta.Inserts.OfType<Customer>()), and make your required changes.
Then call base.SubmitChanges(...).
Here's a related example, handling deletes.
The Add method only sets up a link between the two objects: it doesn't mark the added item for insertion into the database. For that, you need to call InsertOnSubmit on the Table<Child> instance contained within your DataContext. The trouble, of course, is that there's no innate way to access your DataContext from the method you describe.
You do have access to it by implementing InsertParent in your DataContext, so I'd go that route (and use InsertOnSubmit instead of Add, of course).
EDIT: I assumed that the partial method InsertParent would be called by the DataContext at some point, but looking at my own code, that method appears to be defined yet never referenced by the generated class. So what's the use, I wonder?
In LINQ to SQL you make a "trigger" by adding a partial class for the dbml file and then implementing a partial method. Here is an example that wouldn't do anything extra, because it just calls the built-in deletion.
partial void DeleteMyTable(MyTable instance)
{
    // custom code here
    ExecuteDynamicDelete(instance);
    // or here :-)
}
