There are occasions where I'm handed a model containing only some of the data I require, for example a catalog/product instance that doesn't contain certain attributes I need to use, such as size, widget number, or waist measurement.
To alleviate this, my current options are:
Create a new block, and load the required attributes manually using addAttributeToSelect($name).
Loading the entire model in the template, using the ID from the current, inadequately populated, model with, for example, Mage::getModel('catalog/product')->load($product->getId()).
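For illustration, the two options look roughly like this (the attribute codes `size` and `widget_number` are placeholders; this assumes a stock Magento 1.x setup):

```php
// Option 1: in a block, build a collection that selects the extra attributes.
$collection = Mage::getResourceModel('catalog/product_collection')
    ->addAttributeToSelect('size')
    ->addAttributeToSelect('widget_number');

// Option 2: in the template, reload the full model from the partial one's ID.
$fullProduct = Mage::getModel('catalog/product')->load($product->getId());
$size = $fullProduct->getData('size');
```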
To my question: is there a way I can pick additional attributes that I'd like to load in my model collection after ->load() has been called? Also, is there a method to do this on individual models?
The correct and safest approach (though not the best one - see below) is the one described in the question: load the product once again.
There are no ready-made methods to add more attributes after the product is loaded, for several reasons:
1) During the Model's lifetime, many of its values are calculated and cached inside the Model. Adding more attributes (e.g. price) would therefore change the Model's state, but would not affect the results of methods that are designed to return those attributes' values (e.g. getPrice()), because they internally do additional preprocessing and depend on previously calculated data.
2) The Model's state would be inconsistent: some methods would return cached, no-longer-valid values calculated while the attribute was still empty, while other methods would return fresh values. Using such a Model would be unsafe and its behavior unpredictable.
3) The code needed to support such reloading would be quite complex.
Solutions
1) The first good solution (although the heaviest one) is to load the product once again, every time your block/model/helper needs an extended set of attributes.
2) A better solution is to load a new collection of all the products with all the additional attributes, whenever you see that these attributes will be required and the original collection doesn't have them.
3) The best solution is to load the original product collection with all required attributes. Sometimes collections really do load products with only a subset of the possible attributes - mainly this is legacy code for EAV optimization (flat tables are now turned on by default, so this optimization is no longer needed), or it can happen when the collection is loaded via a search engine (e.g. Solr in Magento EE), which by default doesn't store all the attributes in its records.
3.1) You can add the required attributes to the original collection at the place where it is instantiated, via the addAttributeToSelect($attributeNames) method mentioned in the question.
3.2) You can add your attributes to the list of attributes automatically populated in a collection. These lists differ from module to module, and they are stored in different places - some in config, others in the database. The concrete place (config or DB table) where to add attributes for auto-population depends on your specific case.
4) Sometimes, when you need only the attribute values, it may be much easier and faster to write a Resource Model that loads them directly from the DB by product IDs and the current store ID scope. Then you can either take the risk of setting them as properties on the Products in a collection, or safely attach them to the Products as, say, a myAdditionalAttributesValuesArray property, or use them as an independent array mapped to product IDs.
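A hedged sketch of option 4, reading one varchar attribute's values straight from the EAV value table. The table name, $attributeId, $productIds and $storeId are assumptions for illustration:

```php
// Assumes Magento 1.x; $attributeId, $productIds and $storeId come from the caller.
$resource   = Mage::getSingleton('core/resource');
$connection = $resource->getConnection('core_read');

// Load raw values for one attribute, default scope (store 0) plus current store.
$select = $connection->select()
    ->from($resource->getTableName('catalog_product_entity_varchar'),
           array('entity_id', 'value'))
    ->where('attribute_id = ?', $attributeId)
    ->where('entity_id IN (?)', $productIds)
    ->where('store_id IN (?)', array(0, $storeId))
    ->order('store_id ASC');   // store-scoped row overwrites the default in fetchPairs

// entity_id => value map, ready to attach to products or use independently.
$valuesByProductId = $connection->fetchPairs($select);
```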
Related
I'll have some items in a model's database table that I more often than not won't want to include in queries for that model. So, rather than querying to exclude these items everywhere I call for the model, either directly or via a relationship, it would be nice to tell Laravel in one place to exclude these items from all collections. The criterion for excluding will be a column value.
Perhaps somewhere in the model I can put this criteria?
Ideally the solution will also provide a way to easily explicitly re-include those excluded items in collections, at the point of querying.
Laravel's model scopes are almost there, but I need it the other way around. Perhaps scopes will solve the second part of my question (in the paragraph above this one).
I found the answer: Global Scopes. https://laravel.com/docs/8.x/eloquent#global-scopes
I was previously looking at an older Laravel version's doc, which didn't have Global Scopes.
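A minimal sketch of a global scope, assuming the exclusion column is a boolean named `archived` (the scope, model, and column names are invented for illustration):

```php
use Illuminate\Database\Eloquent\Builder;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Scope;

// The "one place": a scope applied to every query for the model.
class NotArchivedScope implements Scope
{
    public function apply(Builder $builder, Model $model)
    {
        $builder->where('archived', false);
    }
}

class Item extends Model
{
    protected static function booted()
    {
        static::addGlobalScope(new NotArchivedScope);
    }
}

// Explicitly re-include the excluded rows at the point of querying:
$all = Item::withoutGlobalScope(NotArchivedScope::class)->get();
```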
I have 2 Models:
Product
Currency
There is no relationship between the corresponding tables of both models in the database (and there shouldn't be).
When I request a Product I need to return its price in multiple Currencies. For that I will need to read all the currencies from the database using the Currency Model.
Should I select records from both Models inside a method in Product's Controller and then calculate the prices using properties from the objects read from the database or should I read the currencies from inside a method in Product's Model and then do the same operation?
Your priorities should be 1) eliminating duplication of code and 2) organizing code in a way that will be easy to maintain. If every time you look up a product, you'll also want its price in multiple currencies, then a Product class method called findProductWithPrices is perfectly appropriate. If it's only ever going to be needed from the one endpoint in ProductController, then placing the logic in the controller will probably make it easier to keep track of.
Another option, depending on how often you'll end up doing these calculations, would be to add a prices attribute to Product with a json type, and make it a cached dictionary of currency->price. Then you only have to do these calculations (and update prices) when the price of a product changes.
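A hedged sketch of the model-method variant described above; `findWithPrices`, `rate`, and `code` are invented names, and the sketch assumes currency rates are stored relative to the base currency of the product price:

```php
use Illuminate\Database\Eloquent\Model;

class Product extends Model
{
    // Load one product and attach its price converted into every currency.
    public static function findWithPrices($id)
    {
        $product = static::findOrFail($id);

        // currency code => converted price, e.g. ['USD' => 9.99, 'EUR' => 9.20]
        $product->prices = Currency::all()->mapWithKeys(
            function ($currency) use ($product) {
                return [$currency->code => $product->price * $currency->rate];
            }
        );

        return $product;
    }
}
```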
In my data model I take a statement from a user with hashtags; each hashtag is turned into a node, and their co-occurrence is the relationship between them. For each relationship I need to take into account:
the user who created it - rel.user property
the time it was created - rel.timestamp property
the context it was created in - rel.context property
the statement it was made in - rel.statement property
Now, Neo4j doesn't allow relationship property indexing, so when I do a search that requires me to retrieve and evaluate those properties, it takes a very long time. Specifically, when I issue a Cypher request of this kind:
MERGE (hashtag1)-[rel:TO {
  context: "deb659c0-a18d-11e3-ace9-1fa4c6cf2894",
  statement: "824acc80-aaa6-11e3-88e3-453baabaa7ed",
  user: "b9745f70-a13f-11e3-98c5-476729c16049"
}]->(hashtag2)
ON CREATE SET
  rel.uid = "824f6061-aaa6-11e3-88e3-453baabaa7ed",
  rel.timestamp = "13947117878770000";
This request first checks whether there is a relationship with those properties; if there is, it does nothing, but if there is none, it adds a new one (with a unique ID and timestamp). Because each relationship has to be evaluated - and they are not indexed - the request takes a very long time to go through. This is a problem because I'm dealing with about 100 nodes and 300 relationships in one query (the one above is only one type; a few others are added to the query, but those above are the most expensive).
Therefore the actual question:
Does anybody know of a good way to keep those relationship properties and somehow make them faster to retrieve and evaluate when needed? Or do you think I should use a different type of request (if so, which)?
Thank you!
This almost looks to me as if your relationship should actually be a node, which would then be connected to nodes for:
context
user
statement
tag1
tag2
tagN
Then you can have sensible merge options (e.g. merge on the UID).
Currently you lose the power of the graph model for your relationships.
This is also discussed in the graph-databases book in the chapter with the email domain.
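A hedged sketch of that reified model in Cypher; the labels and relationship types (CoOccurrence, IN_CONTEXT, etc.) are invented names, and hashtag1/hashtag2 are assumed to be already matched earlier in the query. With a unique constraint on uid, each MERGE hits an index instead of scanning relationship properties:

```cypher
// Merge the endpoints that used to live in relationship properties.
MERGE (ctx:Context  {uid: "deb659c0-a18d-11e3-ace9-1fa4c6cf2894"})
MERGE (st:Statement {uid: "824acc80-aaa6-11e3-88e3-453baabaa7ed"})
MERGE (u:User       {uid: "b9745f70-a13f-11e3-98c5-476729c16049"})

// The co-occurrence itself becomes a node, merged on its unique id.
MERGE (co:CoOccurrence {uid: "824f6061-aaa6-11e3-88e3-453baabaa7ed"})
  ON CREATE SET co.timestamp = "13947117878770000"

// Plain, property-free relationships connect it to everything else.
MERGE (co)-[:FROM]->(hashtag1)
MERGE (co)-[:TO]->(hashtag2)
MERGE (co)-[:IN_CONTEXT]->(ctx)
MERGE (co)-[:IN_STATEMENT]->(st)
MERGE (co)-[:BY]->(u)
```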
Do you already have your hashtag1 and hashtag2 nodes available?
And if so, how many rels already exist between these?
What Cypher has to do for this to work is go over each of those relationships and compare all three properties (which I'm not sure will fit into shortstring storage), so they have to be loaded if they are not in the cache. You can check your store files: if you have a large string-store file, then those UIDs might not fit into the property records and have to be loaded separately.
What is the memory config of your server (heap and mmio)?
All that adds up.
The small web application I am working on is becoming bigger and bigger. I've noticed that when posting forms or just calling other functions, I've been passing parameters that consist of either IDs or a whole instance of a Model class.
From a performance standpoint, is it better for me to pass the whole Model object (filled with values), or should I pass the ID and then retrieve the record from the database?
Thanks!
For performance benefits you can do a lot of things; common ones are:
1) Fetch only as many records as are needed, e.g. customized paging; in LINQ use the Skip and Take methods.
2) Use data caching in controllers and cache dependencies for lists that are bound to the view.
3) Use Compiled query to fetch records. (see here)
Apply all these and you'll see a marked improvement in page-load speed.
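Tip 1 can be sketched like this; the in-memory range stands in for any IQueryable source (against a database LINQ provider the same calls translate to a single paged query):

```csharp
using System;
using System.Linq;

class PagingDemo
{
    static void Main()
    {
        var ids = Enumerable.Range(1, 100);   // placeholder for db.Products
        int page = 2, pageSize = 20;

        var pageOfIds = ids
            .OrderBy(id => id)                // a stable order is required before paging
            .Skip((page - 1) * pageSize)      // skip the earlier pages
            .Take(pageSize)                   // fetch only this page
            .ToList();

        Console.WriteLine(pageOfIds.First()); // first item of page 2
    }
}
```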
EDIT: Regarding IDs: in this question, the performance impact will be the same whether you pass only the ID and fetch the rest of the model from the database, or pass the filled model.
Do not solve problems which do not exist yet. Use a tool to measure the performance problem and then try to solve it.
It is always best to consider these from the use case.
For example, if I want to get an item by ID, then I pass the ID, not the whole object with the ID filled out.
I use WCF services to host my BLL and interface to my DAL, so passing data around is a costly exercise, so I do it sparingly.
If I need to update an object, I pass the object; if I just want to perform an action on an object, such as delete or get, I use the ID.
Si
I have some data being pulled in from an Entity model. This contains attributes of items, let's say car parts with max-speed, weight and size. Since there are a lot of parts and the base attributes never change, I've cached all the records.
Depending on the car these parts are used in, these attributes might then be changed, so I set up a new car, copy the values from the cached item "Engine" to the new car object, and then add "TurboCharger", which boosts the max speed, weight and size of the Engine.
The problem I'm running into is that the Entity model still seems to be tracking the context back to the cached data. So when the weight is increased by the local method, it increases for all users. I tried adding MergeOption.NoTracking to my context, as this is supposed to remove all entity tracking, but it still seems to track back. If I turn off the cache, it works fine, as it pulls fresh values from the database each time.
If I want to copy a record from my entity model, is there a way I can say "Copy the object but treat it as a standard object with no history of coming from entity" so that once my car has the attributes from an item, it is just a flattened object?
Cheers!
I'm not too sure about MergeOption.NoTracking on the whole context and exactly what it does, but what you can do as an alternative is add .AsNoTracking() to your query against the database. This will definitely return a detached object.
Take a look here for some details on AsNoTracking usage : http://blog.staticvoid.co.nz/2012/04/entity-framework-and-asnotracking.html.
The other thing is to make sure you enumerate your collection before you insert it into the cache, to ensure you aren't acting on the queryable - i.e. use .ToArray().
Another option is to manually detach the object from the context (using Detach(T entity)).
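A minimal sketch combining the suggestions above; `db` and `CarParts` are placeholder names for your DbContext and entity set:

```csharp
// Assumes an EF 4.1+ DbContext. AsNoTracking returns entities the context
// will not track, and ToArray forces enumeration before the results are cached,
// so later edits to copies cannot leak back into the shared cache.
var partsForCache = db.CarParts
    .AsNoTracking()
    .ToArray();

// ObjectContext alternative: detach a single already-loaded entity.
// context.Detach(enginePart);
```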