I have 3D models connected to cluster-nodes. When I switch from one node to another, the model is cleared from the screen and the other model is added to the map. When I select the previous node, I want the preloaded 3d model to come from the cache.
https://jsfiddle.net/1auox3f8/20/
For that you'll need to build your own cache (a Map object could be ideal) that stores the gltf.scene result.
You'll also need some functions wrapping your object loader, and a properly scoped variable for the Map cache.
The first method you need (let's call it loadCache) will first check whether the key already exists in the Map (I would suggest using the URL as the key and a Promise as the value).
If the URL key doesn't exist in your Map cache, you create a new key in the Map whose value is a Promise. That Promise will call another method (let's call it loadObj) that receives the URL and a callback, and calls loader.load. Once loader.load creates the gltf.scene object, you return the object to loadCache and resolve the Promise.
If the URL key already exists in your Map cache, just get its result with .then.
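The two steps above can be sketched as a small promise-based cache. This is a minimal sketch, not the full three.js wiring: loadObj here stands in for your real loader wrapper (in practice, a function that calls loader.load(url, gltf => resolve(gltf.scene), undefined, reject)), and the names loadCached/loadObj are just the suggested ones from above.

```javascript
// Promise-based cache: the URL is the key, the pending/resolved Promise is
// the value, so concurrent requests for the same model share a single load.
const componentCache = new Map();

// loadObj is your loader wrapper; it must return a Promise for the loaded object.
function loadCached(url, loadObj) {
  if (!componentCache.has(url)) {       // cache miss: start the load exactly once
    componentCache.set(url, loadObj(url));
  }
  return componentCache.get(url);       // always a Promise, pending or resolved
}
```

On a cache hit the stored Promise resolves immediately, so switching back to a previously selected node reuses the already-parsed scene (you would typically clone it before adding it to the map again).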
If you are going to play hard with Mapbox and three.js, I'd recommend you take a look at threebox, which already has a cache implemented this way to optimize performance and load thousands of objects, among many other features. You can also check the cache logic implemented in threebox here
Related
I would like to understand the difference between the mount and render methods in Livewire components, since I have seen examples where both are used to define the initial state of variables. For instance, when you instantiate a variable with records from the model, which is the right place to load the data using the ORM syntax?
The mount() method is what's called a "lifecycle hook". There are a few more of these kinds of methods in Livewire, which are outlined in the official documentation - https://laravel-livewire.com/docs/2.x/lifecycle-hooks - while render() is the final method called to render the actual view.
The mount() method is the constructor of the component. This is where you pass in the data that the component needs. It is only called once, on the initialization of the component, which means it's also typically where you set up initial values that aren't constants.
However, since public properties of a Livewire component can only be collections, model instances, arrays, or native PHP types like string and integer, you can't pass more "advanced" types that rely on state - for example, the pagination of a query of models.
That is why you would sometimes need to pass data to the component via the render() method, like you would when returning data in a normal Laravel controller. Another reason to pass data here is that the data is not exposed in JavaScript, like the public properties of the component are.
The render() method is called at the end of every lifecycle request, but before the component dehydrates. Official documentation has more detailed information https://laravel-livewire.com/docs/2.x/rendering-components#render-method - the data defined here isn't a property of the class, and thereby not accessible in the other methods in the component.
So to answer your question, it depends on what type of data you are passing, if the data should be accessible in the other methods in the class or if it's sensitive such that it shouldn't be visible in the JavaScript object attached to the component.
The mount method is like any constructor: you can use it in several cases, and in others you don't need it at all. For example, if you have a nested component inside a full-page component, but initially the nested properties are null or require a definition, you define them there. It is also used for route model binding definitions, but be aware that anything you declare here will not be updated, hydrated, or changed after the component is initialized. Mostly, this is the difference from the render method.
I have an app using React + Redux + Normalizr, and I would like to know the best practices to reduce the number of renders when something changes in the entities.
Right now, if I change just one entity inside entities, it re-renders all the components, not only the components that need that specific entity.
There are several things you can do to minimize the number of renders in your app, and make updates faster.
Let's look at them from your Redux store down to your React components.
In your Redux store, you can use immutable data structures (for example, Immutable.js). This will make all subsequent optimizations faster, as you'll be able to compare changes by only checking for previous/next state slice equality rather than recursively comparing all props.
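As a sketch of that idea with plain objects and spread updates instead of Immutable.js (the reducer shape and the action name here are hypothetical, just for illustration): only the changed entity and its enclosing slices get new object identities, so everything else can be compared with a cheap `!==` check.

```javascript
// Immutable-style update: produce new objects only along the changed path,
// so unchanged slices keep their identity and equality checks stay cheap.
function entitiesReducer(state = { users: {} }, action) {
  switch (action.type) {
    case 'USER_RENAMED':
      return {
        ...state,
        users: {
          ...state.users,
          [action.id]: { ...state.users[action.id], name: action.name }
        }
      };
    default:
      return state; // same reference: nothing changed
  }
}
```

Components subscribed to an untouched user can then skip re-rendering, because their slice of state is literally the same object as before.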
In your containers, that is in your top-level components where you inject redux state as props, ask only for the state slices you need, and use the pure option (I assume you're using react-redux) to make sure your container will be re-rendered only if the state slices returned by your mapStateToProps functions have changed.
If you need to compute derived data, that is if you inject in your containers data computed from various state slices, use a memoized function to make sure the computation is not triggered again if the input doesn't change, and to keep object equality with the value the previous call returned. Reselect is a very good library to do that.
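To illustrate what Reselect does under the hood, here is a hand-rolled memoized selector (a simplified sketch, not Reselect's actual API, which supports multiple inputs): derived data is recomputed only when the input slice's identity changes, and otherwise the previously returned object is reused, preserving object equality for downstream checks.

```javascript
// Minimal memoized selector: recompute derived data only when the input
// state slice changes identity (cheap to detect with immutable updates).
function createMemoSelector(inputFn, computeFn) {
  let lastInput, lastResult;
  return function (state) {
    const input = inputFn(state);
    if (input !== lastInput) {   // identity check, not a deep comparison
      lastInput = input;
      lastResult = computeFn(input);
    }
    return lastResult;           // same reference while input is unchanged
  };
}
```

Because the same reference comes back while the input is unchanged, pure components receiving the derived value as a prop will not re-render needlessly.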
In your dumb components use the shouldComponentUpdate lifecycle method to avoid a re-render when incoming props do not change. If you do not want to implement this manually, you can use React's PureRenderMixin to check all props for you, or, for example, the pure function from the Recompose library if you need more control. A good use case at this level is rendering a list of items: if your item component implements shouldComponentUpdate, only the modified items will be re-rendered. But this shouldn't become a fix-all-problems habit: good component separation is often preferable, in that it makes props flow only to those components that need them.
As far as Normalizr is concerned, there is nothing more specific to be done.
If in some case (it should be rare) you detect performance problems that are directly related to React's rendering cycles of components, then you should implement the shouldComponentUpdate() method in the involved components (details can be found in React's docs here).
Change-detection in shouldComponentUpdate() will be particularly easy because Redux forces you to implement immutable state:
shouldComponentUpdate(nextProps, nextState) {
  return nextProps.dataObject !== this.props.dataObject;
  // true when dataObject has become a new object,
  // which happens if (and only if) its data has changed,
  // thanks to immutability
}
I am in a highly dynamic context, heavily using dynamic instantiation of components from sources. Naturally I am concerned with the overhead from having to parse those sources each and every time an object is dynamically created. When the situation allows it, I am using manual caching:
readonly property var componentCache: ({})

function create(type) {
    var comp = componentCache[type]
    if (comp === undefined) { // "cache miss"
        comp = Qt.createComponent(type)
        if (comp.status !== Component.Ready) {
            console.log("Component creation failed: " + comp.errorString())
            return null
        } else {
            componentCache[type] = comp
        }
    }
    return comp.createObject()
}
Except that this is not always applicable, for example, when using a Loader with a component that needs to specify object properties via the setSource(source, properties) function. In this scenario it is not possible to use a manually cached Component, as the function only takes a URL. The docs do vaguely mention "caching", but it is not exactly clear whether this cache is QML-engine-wide for the component from that source or, more likely, just for that particular Loader.
If the active property is false at the time when this function is
called, the given source component will not be loaded but the source
and initial properties will be cached. When the loader is made active,
an instance of the source component will be created with the initial
properties set.
The question is how to deal with this issue, and is it even necessary? Maybe Qt does component from source caching by default? Caching certainly would make sense in terms of avoiding redundant source loading (from disk or worse - network), parsing and component preparation, but its effects will only be prominent in the case of excessive dynamic instantiation, and the "typical" QML dynamic object creation scenarios usually involve a one-time object, in which case the caching would be useless memory overhead. Caching also doesn't make sense in the context of the possibility that the source may change in between the instantiations.
So since I don't have the time to dig through the mess that is the private implementations behind Qt APIs, if I had to guess, I'd say that component from source caching is not likely, but as a mere guess, it may as well be wrong.
Not an answer per se: I tripped into the question of component caching yesterday and was surprised to discover that Qt appears to cache components. At least when creating dynamic components, the log statements related to createComponent only appear once in my test app. I've searched around and haven't seen any specific info in the docs about caching. I did come across a couple of interesting methods in the QQmlEngine class. Then I came across the release notes for Qt 5.4:
The component returned by Qt.createComponent() is no longer parented
to the engine. Be sure to hold a reference, or provide a parent.
So? If a parent is provided, it is cached(?). That appears to be the case (and for 5.5+?). If you want to manage it yourself, don't provide a parent and retain the reference yourself(?).
QQmlEngine Class
QQmlEngine::clearComponentCache() Clears the engine's
internal component cache. This function causes the property
metadata of all components previously loaded by the engine to be
destroyed. All previously loaded components and the property bindings
for all extant objects created from those components will cease to
function. This function returns the engine to a state where it
does not contain any loaded component data. This may be useful in
order to reload a smaller subset of the previous component set, or to
load a new version of a previously loaded component. Once the
component cache has been cleared, components must be loaded before any
new objects can be created.
void QQmlEngine::trimComponentCache()
Trims the engine's internal component cache.
This function causes the property metadata of any loaded components
which are not currently in use to be destroyed.
A component is considered to be in use if there are any extant
instances of the component itself, any instances of other components
that use the component, or any objects instantiated by any of those
components.
After some toying with the code, it looks like caching also takes place in the "problematic case" of Loader.setSource().
Running the example code from this question, I found out that regular component creation fails to expand a tree deeper than 10 nodes, because of Qt's hard-coded limit:
// Do not create infinite recursion in object creation
static const int maxCreationDepth = 10;
This causes component instantiation to abort if there are more than 10 nested component instantiations in a regular scenario, producing an incorrect tree and generating a warning.
This doesn't happen when setSource() is used; the obvious reason would be that the component is cached, and thus no component instantiation takes place.
I have a can.Model with defined findOne and findAll methods. Sometimes, however, it is necessary to create model instances from plain objects, e.g. objects bootstrapped into the HTML as globals during the initial page load.
The problem is that these instances are not merged with the instances stored in the can.Model.store object. Moreover, they are not stored there even though they have a defined id attribute. Is this expected behaviour? What is the right pattern for creating model instances bootstrapped in HTML as variables?
Only models that are bound to (e.g. having data displayed on the page) will be added to the store. The store is only used for keeping those models. If nobody is listening to model changes there is no need to keep them stored (in fact, it would create a memory leak).
You can verify it like this:
var model = new MyModel({ name: 'David' });
// binding to the instance is what adds it to MyModel.store
model.bind('change', function() {});
When using Hibernate (JPA), if I do the following call :
MyParent parent = em.getReference(MyParent.class, "myId");
parent.getAListMappedAsOneToMany().add(record);
record.setParent(parent);
Is there any performance problem ?
My thought is that getReference does not load the entity, and getAListMappedAsOneToMany().add does not need to load the list, as it is defined as lazy fetch.
getAListMappedAsOneToMany could return a very big list if it is really accessed (by calling get or size method).
Could you confirm that there is no performance problem with such a code ?
getReference() doesn't go to the database, and returns a proxy. But if you call a method on the proxy, it initializes the proxy and gets the entity data from the database. So since you call getAListMappedAsOneToMany() on your entity, you don't gain anything by calling getReference() instead of find().
Similarly, the list is loaded lazily. This means that it will only be loaded when you call a method on it. And you do call a method on it: add(). So the data of the elements in the list is also loaded from the database.
Turn on SQL logging in development to see and understand all the queries executed by Hibernate.
If you want to avoid loading the list, replace your code by
MyParent parent = em.getReference(MyParent.class, "myId");
record.setParent(parent);
This won't load anything from the database, and it will make the association persistent because Record.parent is the owner side of the association. But beware that this will also make your in-memory object graph inconsistent if the parent has already been loaded before.
getReference() is useful when you don't want to use any members of the object, but just want to hand a reference to it to another object. For example, when entity A references entity B and you want to set your b instance as the B of A, then getReference() is what you need.
But in your case, when you get the proxy object, you immediately try to reach one of its members (aListMappedAsOneToMany). Thus the whole parent object will be loaded from the database.
It is true that getAListMappedAsOneToMany().add(record) will not trigger a load from the database, but only if the collection is mapped with inverse="true".
You can learn more information about performance from:
http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/performance.html#performance-collections-mostefficentinverse